[Lldb-commits] [PATCH] D46005: [test] Add a utility for dumping all tests in the dotest suite

Pavel Labath via lldb-commits lldb-commits at lists.llvm.org
Wed May 2 10:19:08 PDT 2018

That's kind of true, but also not really.. :)

If I were writing this test in dotest style (*), I would probably write
these as multiple **assertions** within a single test. So it would be
something like:
def single_test(self):
    module = prepare()
    self.assertEqual(module.operation1(), ["result1a", "result1b", ...])
    self.assertEqual(module.operation2(), ["result2a", "result2b", ...])

and *not*:
def test_1(self):
    module = prepare()
    self.assertEqual(module.operation1(), [...])

def test_2(self):
    module = prepare()
    self.assertEqual(module.operation2(), [...])
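To make the contrast concrete, here is a minimal runnable sketch of the first style using Python's unittest. The names (prepare, operation1, operation2, FakeModule) are hypothetical stand-ins taken from the sketch above, not real lldb APIs:

```python
import unittest

# Hypothetical stand-in for the module under test; the names come from
# the sketch above, not from any real lldb API.
class FakeModule:
    def operation1(self):
        return ["result1a", "result1b"]

    def operation2(self):
        return ["result2a", "result2b"]

def prepare():
    return FakeModule()

class SingleTestStyle(unittest.TestCase):
    # dotest style: one "test" whose assertions share a single setup.
    # If the operation1 assertion fails, operation2 is never checked.
    def test_operations(self):
        module = prepare()
        self.assertEqual(module.operation1(), ["result1a", "result1b"])
        self.assertEqual(module.operation2(), ["result2a", "result2b"])

# Run the single test and report whether it passed.
suite = unittest.TestLoader().loadTestsFromTestCase(SingleTestStyle)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print("ok" if outcome.wasSuccessful() else "failed")
```

The point of the sketch: the runner sees exactly one test, so a failure in the first assertion masks the second, which is the trade-off being discussed.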

I think the real question here is whether we want to adjust the lit notion
of a "test" to match what the developer thinks of as a "test", or do we
want to do it the other way around. You seem to want the latter; I am
proposing the former (at least for the time being). Right now, the two
things are disconnected, which is not an ideal situation. Among other
things, this manifests in the decorators which skip/xfail/... tests
operating on a different level than lit, which in turn means it is not
possible to map the test results onto lit states.

(*) I have thought about making this a unit test. The fact that I need to
test some dozen or so independent entry points makes this sound like a good
idea. (Right now I had to concoct a bunch of command line arguments to
invoke the appropriate APIs, and I still don't have all the entry points
covered.) The reason I chose not to do that is that invoking
clang/llvm-mc/etc. from a gtest is a bit.. messy..

On Tue, 1 May 2018 at 18:00, Zachary Turner <zturner at google.com> wrote:

> On Wed, Apr 25, 2018 at 9:19 AM Adrian Prantl <aprantl at apple.com> wrote:

>> On Apr 25, 2018, at 9:08 AM, Zachary Turner <zturner at google.com> wrote:

>> But is there a reason why that is more valuable with LLDB than it is
with LLVM?  In LLVM, when a test fails, it stops and doesn't run subsequent
RUN lines.  So you have the same issue there.

>> That's not a good analogy. Multiple RUN lines don't necessarily mean
separate tests, they mean separate commands being executed that may depend
on each other. As Pavel pointed out before, this would be closer to how
gtests behave.

>> -- adrian

> The analogy I was going for is better illustrated in
https://reviews.llvm.org/D46318.  Here we have multiple distinct tests that
are similar enough to be in the same file.  lit runs them sequentially and
stops as soon as any of them fails.
