[Lldb-commits] [PATCH] D32930: New framework for lldb client-server communication tests.

Zachary Turner via lldb-commits lldb-commits at lists.llvm.org
Wed May 31 14:16:45 PDT 2017


And once you start annotating the source code with test directives, things
become even more concise.  For example, you could write the above test like
this:

# input_file: foo.cpp
# int main(int argc, char **argv)  {  // bp1 = BREAK_HERE
#   return 0;    // bp2 = BREAK_HERE
# }
# // VERIFY(bp1.hits == 1)
# // VERIFY(bp2.hits == 1)

You can have cross-references, conditionals, variables, etc.  The only real
limit is your imagination.  Anyone can mentally digest this entire test in
about 5 seconds without having to read through multiple files and grok
Python code and a separate API.
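
For comparison, here is roughly what the same check looks like today in the
Python / SB API style -- a sketch from memory rather than a real in-tree
test, and it assumes the usual companion Makefile and foo.cpp sitting next
to it:

    import os

    import lldb
    from lldbsuite.test.lldbtest import TestBase


    class BreakpointHitCountTestCase(TestBase):

        mydir = TestBase.compute_mydir(__file__)

        def test_both_breakpoints_hit(self):
            self.build()                      # drives the accompanying Makefile
            exe = os.path.join(os.getcwd(), "a.out")
            target = self.dbg.CreateTarget(exe)
            self.assertTrue(target.IsValid())

            bp1 = target.BreakpointCreateByLocation("foo.cpp", 1)
            bp2 = target.BreakpointCreateByLocation("foo.cpp", 2)

            process = target.LaunchSimple(None, None, os.getcwd())
            self.assertEqual(process.GetState(), lldb.eStateStopped)  # at bp1
            process.Continue()                # on to bp2
            process.Continue()                # and run to exit
            self.assertEqual(bp1.GetHitCount(), 1)
            self.assertEqual(bp2.GetHitCount(), 1)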

On Wed, May 31, 2017 at 2:02 PM Zachary Turner <zturner at google.com> wrote:

> This hypothetical DSL could still run SB API statements.  I don't
> necessarily think it *should*, since what the SB API does and what the
> commands do are very different (for example, SBDebugger.CreateTarget()
> does something entirely different from "target create", so you're not
> actually testing what the debugger does when you create a target; you're
> only testing an API that is not really used outside of IDE integration
> and scripting scenarios).  But it could.
>
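> Concretely, the two paths look something like this from Python (just a
> sketch; assume "import lldb" has run, debugger is an SBDebugger, and
> "a.out" is a placeholder binary):
>
>     # The command path -- what an interactive user exercises.
>     interp = debugger.GetCommandInterpreter()
>     result = lldb.SBCommandReturnObject()
>     interp.HandleCommand("target create a.out", result)
>
>     # The API path -- what IDE integrations and scripts exercise.
>     target = debugger.CreateTarget("a.out")
>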
> I mean in theory you could have a test that looks like:
>
> # input_file: foo.cpp
> # int main(int argc, char **argv) {
> #    return 0;
> # }
> bp1 = set_breakpoint(foo.cpp:2)
> bp2 = set_breakpoint(foo.cpp@_main)
> run
> expect(bp1.hits == 1)
> expect(bp2.hits == 1)
>
> A test consisting of 1 file and 9 lines is a lot easier for me to
> understand than a test consisting of a Python file, a Makefile, and a
> source file, where the Python file is a minimum of 20 lines of complicated
> setup and initialization steps that are mostly boilerplate.
>
> Note that I'm not necessarily even advocating for something that looks
> like the above; it's just the first thing that came to mind.
>
> As a first pass, you could literally just have lit be the thing that walks
> the directory tree scanning for test files, then invokes Make and runs the
> existing .py files.  That would already be a big improvement.  1,000 lines
> of Python code that many people understand and actively maintain and
> improve is better than 2,000 lines of Python code that nobody understands
> and nobody really maintains or improves.
>
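> A very rough sketch of what that first pass could look like as a lit.cfg
> (the names and substitutions here are illustrative, not a real in-tree
> config):
>
>     import os
>     import lit.formats
>
>     config.name = "lldb-api"
>     config.test_source_root = os.path.dirname(__file__)
>     config.suffixes = [".py"]
>     # ShTest executes the "RUN:" lines embedded in each test, so every
>     # existing TestFoo.py would grow a one-line "# RUN: %dotest %s" header.
>     config.test_format = lit.formats.ShTest(execute_external=True)
>     config.substitutions.append(
>         ("%dotest",
>          "python " + os.path.join(config.test_source_root, "dotest.py")))
>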
> On Wed, May 31, 2017 at 1:45 PM Jim Ingham <jingham at apple.com> wrote:
>
>>
>> > On May 31, 2017, at 12:22 PM, Zachary Turner <zturner at google.com>
>> wrote:
>> >
>> > Writing tests that are
>> >
>> > A) as easy as possible to write and understand
>>
>> I've never understood why you consider the tests in our test suite hard to
>> write or understand.  Until I added the example directory, you had to go
>> search around for an exemplar to work from, which was a bit of a pain.  But
>> now you copy whichever of the examples you want into place, and you have a
>> process stopped somewhere and ready to poke at.  You do need to learn some
>> SB APIs, but on that, see below...
>>
>> > B) built on a multiprocessing infrastructure that is battle-tested and
>> > known not to be flaky
>>
>> We haven't come across failures in the tests caused by the test
>> infrastructure in a while now.  The multiprocessing flakiness I've seen is
>> because you are trying to run many tests in parallel, and some of those
>> tests require complex setup that times out when machines are heavily
>> loaded.  For the most part we solve that by running the tests that need
>> long timeouts serially.  That seems like a solved problem as well.
>>
>> > C) familiar to other LLVM developers, so as not to discourage
>> > subject-matter experts from other areas from making relevant
>> > improvements to LLDB
>> >
>>
>> If I were a developer coming new to lldb now and had to write a test
>> case, I would have to learn something about how the SB APIs work (that,
>> and a little Python).  The test case part is pretty trivial, especially
>> when copying from the sample tests.  Learning the SB APIs is a bit of a
>> pain, but having done that, the next time I want to write some little
>> breakpoint command to do something clever during my own debugging, I have
>> some skills at my fingertips that are going to come in really handy.
>>
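>> For instance, the same SB API knowledge is all a one-off Python breakpoint
>> command amounts to.  A sketch (the callback signature is the documented
>> one; the body is just an illustration):
>>
>>     # bp_utils.py: load with "command script import bp_utils", then attach
>>     # with "breakpoint command add -F bp_utils.log_hit <breakpoint-id>".
>>     def log_hit(frame, bp_loc, internal_dict):
>>         thread = frame.GetThread()
>>         print("hit %s on thread %d" %
>>               (frame.GetFunctionName(), thread.GetIndexID()))
>>         return False  # returning False auto-continues instead of stopping
>>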
>> If we change over the tests so that what I learn instead is some DSL for
>> describing lldb tests (which would probably not be entirely trivial, and
>> likely not much easier than the SB APIs, which are very straightforward),
>> I will have learned nothing useful outside of writing tests.  I'm not sure
>> how that's a benefit.
>>
>> For the tests that can be decoupled from running processes, that have
>> simple data in and out and don't require much other state - again, the
>> frame unwinder tests are a perfect example - writing separable tests seems
>> great.  But in those cases you are generally poking APIs, and I don't see
>> how coming up with some DSL general enough to represent this is any
>> advantage over how those tests are currently written with gtest.
>>
>> >
>> > For example, I assume you are on board at least to some extent with
>> lldbinline tests.
>>
>> The part of the inline tests that allows you to express the tests you want
>> to run next to the code you stop in is fine until it doesn't work - at
>> which point debugging them becomes a real pain in the neck.  But they still
>> use the common currencies of the command line or SB APIs to actually
>> perform the tests.  I'd be less interested if this were some
>> special-purpose language of our own devising.  I can't see wanting to learn
>> that at all if its only use to me were writing tests.
>>
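>> For reference, the inline mechanism is just source comments carrying the
>> statements to run when execution stops on that line, plus a tiny Python
>> driver -- roughly, and from memory:
>>
>>     // main.cpp
>>     int main(int argc, char **argv) {
>>       return 0; //% self.expect("frame variable argc", substrs=["argc"])
>>     }
>>
>>     # TestInline.py
>>     from lldbsuite.test import lldbinline
>>     lldbinline.MakeInlineTest(__file__, globals(), None)
>>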
>> > After all, they're simpler than the other style of test.  Now, suppose
>> > there were some hypothetical DSL that allowed every test to be an inline
>> > test but still exercise equally complicated scenarios in half the code.
>> > Then suppose it also ran on a more robust multiprocessing infrastructure
>> > than dotest.py.  That's all we're really talking about.
>>
>> Thanks for the clarification.
>>
>> Jim
>>
>>
>>
>> > On Wed, May 31, 2017 at 12:06 PM Jim Ingham via lldb-commits <
>> lldb-commits at lists.llvm.org> wrote:
>> > Before we get past "it's hard" to "just do it", it would help me to be
>> > clear first on what you are actually trying to achieve with this
>> > proposal.  It's not clear to me what problem people are trying to solve
>> > here.  If it is writing tests for the decomposable parts of lldb - like
>> > the tests Jason wrote for the unwinder recently - why was the gtest path
>> > not a good way to do this?  If it is rewriting the parts of the test
>> > suite that exercise the debugger on live targets, what would a lit-based
>> > suite do that we can't do with the current test suite?  Or maybe you are
>> > thinking of some other good I'm missing?
>> >
>> > Jim
>> >
>> >
>> > > On May 31, 2017, at 10:37 AM, Zachary Turner via Phabricator via
>> lldb-commits <lldb-commits at lists.llvm.org> wrote:
>> > >
>> > > zturner added a comment.
>> > >
>> > > In https://reviews.llvm.org/D32930#767820, @beanz wrote:
>> > >
>> > >> One small comment below. In general I agree with the thoughts here,
>> and I think that this is a huge step forward for testing the debug server
>> components.
>> > >>
>> > >> I also agree with Zachary in principle that it would be nice to come
>> up with lit-based test harnesses for more parts of LLDB, although I'm
>> skeptical about whether or not that is actually the best way to test the
>> debug server pieces. Either way, this is a huge step forward from what we
>> have today, so we should go with it.
>> > >
>> > >
>> > > It would be nice if, at some point, we could move past "it's hard"
>> and start getting into the details of what's hard about it.  (Note this
>> goes for the LLDB client as well as lldb-server.)  I see a lot of general
>> hand-wavy comments about how conditionals are needed, or variables, etc.,
>> but that doesn't really do anything to convince me that it's hard.  After
>> all, we wrote a C++ compiler!  And I'm pretty sure that the compiler-rt and
>> sanitizer test suites are just as complicated as, if not more complicated
>> than, any hypothetical lldb test suite.  And those problems have been solved.
>> > >
>> > > What *would* help would be to ignore how difficult it may or may not
>> be, and just take a couple of tests and rewrite them in some DSL that you
>> invent specifically for this purpose, one that is as concise as possible
>> yet as expressive as you need, and go from there.  I did this with a couple
>> of fairly hairy tests a few months ago and it didn't seem that bad to me.
>> > >
>> > > The thing is, the set of people who are experts on the client side of
>> LLDB and the set of people who are experts on LLVM/lit/etc. are mostly
>> disjoint, so nothing is ever going to happen without some sort of
>> collaboration.  For example, I'm more than willing to help out writing the
>> lit bits of it, but I would need a specification of what the test language
>> needs to look like to support all of the use cases.  And someone else has
>> to provide that, since we want to get the design right.  Even if
>> implementing the language is hard, deciding what it needs to look like is
>> supposed to be the easy part!
>> > >
>> > >
>> > > https://reviews.llvm.org/D32930
>> > >
>> > >
>> > >
>>
>>