[lldb-dev] Testing through api vs. commands

Zachary Turner via lldb-dev lldb-dev at lists.llvm.org
Thu Sep 17 10:52:49 PDT 2015


On Thu, Sep 17, 2015 at 10:39 AM <dawn at burble.org> wrote:

> On Thu, Sep 17, 2015 at 05:20:14PM +0000, Zachary Turner wrote:
> > > > a) they should be explicitly marked as interface tests.
> > > What's the decorator for this?
> >
> > There's not one currently.
>
> Will there be?
>
Whenever someone adds one :)  If you want to start doing this, it should be
fairly easy.  Even if the decorator doesn't do anything useful yet, you could
add it now and start applying it to interface tests as you write them (a
minimal sketch of what that might look like is below).  I think earlier in the
thread Jim agreed that it would also be a good idea to separate command and
API tests at the file system level, so that's something else to think about if
you plan to write tests like this.
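
For concreteness, a marker could start out as small as the sketch below.
Everything here is hypothetical: the name interface_test and the attribute it
sets are made up, not existing lldbsuite decorators.

    # Hypothetical sketch only -- no such decorator exists in the test suite.
    # A do-nothing marker that tags a test method so a future runner or
    # category filter could select (or skip) interface/API tests.
    import functools

    def interface_test(func):
        """Mark a test as an SB API interface test (placeholder semantics)."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.__interface_test__ = True  # attribute a runner could inspect
        return wrapper

    @interface_test
    def test_breakpoint_set_via_api():
        pass

    assert getattr(test_breakpoint_set_via_api, "__interface_test__", False)

Later on, the decorator could grow into a real test category instead of just
setting an attribute; the point is only that adding the marker now costs
almost nothing.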


>
> > > > d) Results of these interface tests should also not be *verified* by
> the
> > > > use of self.expect, but itself through the API.  (Relying on the
> text to
> > > be
> > > > formatted a specific way is obviously problematic, as opposed to just
> > > > verifying the actual state of the debugger)
> > >
> > > How do you rely "on the text to be formatted a specific way"?
> >
> > Quick example: One of the most common patterns in the tests is to do some
> > things, print a backtrace, and run a regex against the output of the
> > backtrace.  I've found at least 3 different examples of how this causes
> > things to fail on Windows:
>
> Oh, I misunderstood your point.  You're saying the existing tests rely
> "on the text to be formatted a specific way".  Right.
>
> I was asking how you *could* make sure the output was formatted a
> specific way using the Python API.
>

I'm actually not that convinced it's super useful.  Other people might
disagree, but I'll let them chime in.  Personally I think it's much more
useful to verify that typing a command with certain options affects the
state of the debugger in the expected way.  Right now the way we verify
this is by looking at the formatted output.  But another way (albeit much
more work to get going) would be to have the command objects and the SB
classes both be backed by the same classes that implement the commands (I
admit I'm handwaving a little here).
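
To make the "verify the state" part concrete, here's a rough sketch of the
difference, assuming the usual TestBase setup (self.dbg, the assert helpers,
and a prebuilt "a.out" inferior); the test and symbol names are placeholders:

    import lldb

    def test_stop_at_main(self):
        # Assumes this runs as a method of a TestBase-derived test case and
        # that the "a.out" inferior has already been built.
        target = self.dbg.CreateTarget("a.out")
        target.BreakpointCreateByName("main")
        process = target.LaunchSimple(
            None, None, self.get_process_working_directory())

        # Fragile: scrape the rendered text, e.g.
        #   self.expect("thread backtrace",
        #               patterns=[r"frame #0: .* main"])

        # Robust: ask the debugger for its state through the SB API instead.
        thread = process.GetSelectedThread()
        self.assertEqual(thread.GetStopReason(), lldb.eStopReasonBreakpoint)
        self.assertEqual(thread.GetFrameAtIndex(0).GetFunctionName(), "main")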

This way your core functionality tests use the Python (SB) API both to drive
the debugger and to verify its state, while your command tests simply process
a command, get back some sort of configuration structure, and verify that the
structure is filled out correctly.  Since the core functionality tests have
already verified that the functionality works when given an appropriate
configuration structure, the command tests don't need to do any real work or
manipulate the debugger.
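
Very roughly, and purely as a sketch of the shape of the idea (none of these
classes or functions exist in LLDB), the split might look like:

    # Pure sketch of the idea above; all names here are made up.

    class BreakpointOptions(object):
        """The shared "configuration structure" both layers would use."""
        def __init__(self):
            self.function_name = None
            self.condition = None
            self.one_shot = False

    def parse_break_set(args):
        """Toy stand-in for the command layer: turn argv-style text into
        options, without touching the debugger at all."""
        opts = BreakpointOptions()
        it = iter(args)
        for arg in it:
            if arg in ("-n", "--name"):
                opts.function_name = next(it)
            elif arg in ("-c", "--condition"):
                opts.condition = next(it)
            elif arg in ("-o", "--one-shot"):
                opts.one_shot = True
        return opts

    # A "command test" then only has to check the parse; no debugger involved.
    opts = parse_break_set(["-n", "main", "-o"])
    assert opts.function_name == "main"
    assert opts.one_shot and opts.condition is None

The core functionality (and its SB API tests) would consume the same
options object, so the two layers would only meet at that structure.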

Anyway, this is just a high level idea that probably needs refinement.  I'm
mostly just brainstorming.