[lldb-dev] LLDB tests

Jim Ingham via lldb-dev lldb-dev at lists.llvm.org
Mon Jul 24 18:39:18 PDT 2017


> On Jul 24, 2017, at 3:03 AM, Steve Trotter via lldb-dev <lldb-dev at lists.llvm.org> wrote:
> 
> Hi all,
> 
> I'm fairly new to LLVM and LLDB; I became interested in the project about 3 months ago and I'm hoping to contribute to improving LLDB in time. I've been trying to get to grips with the code and have been looking at the tests as a rough guide to how things work; however, I have some questions about the test suites in LLDB.

Welcome!

> 
> It seems to me that we essentially have tests run by the LIT runner from LLVM core, and tests run by an LLDB-specific python script, `dotest.py`. I notice that the test page for LLDB refers to the `dotest.py` tests, run via `ninja check-lldb`, but not the LIT ones. I also notice that an email titled "lldb-server tests" from Pavel Labath on 15th May 2017 suggests that the long-term plan is to move purely to LIT-style testing. Is this correct, or have I misunderstood?

The discussion in that thread was about tests for lldb-server.

There has been some discussion about using the LIT framework, rather than the current unittest-based runner, to run the lldb test suite: replacing the runner, but keeping the same test code (all the test .py files).  The current test format has the advantage that it exercises lldb's exported API set extensively.  That API has a number of clients (in fact, since it's what Xcode uses, the vast majority of lldb users go through the SB APIs), and it's a core part of lldb for users as well, so the more testing of it we get the better.  It is also a powerful API for driving lldb, and writing tests with it is a good way to see whether the API has all the affordances you need; plus you get results in a natural, structured form that is easy to validate.  So this style of test is going to stay around, whatever else gets added to the lldb testing effort.
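For a flavor of what these tests exercise, here's a minimal sketch of driving lldb through the SB API from Python, much as the test suite does.  The binary name "a.out" and the function "main" are just placeholders for illustration:

import lldb

# Create a debugger; synchronous mode makes calls block until complete.
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)

# Load a target and set a breakpoint by function name.
target = debugger.CreateTarget("a.out")  # placeholder binary
breakpoint = target.BreakpointCreateByName("main")

# Launch the process and verify that it stopped.
process = target.LaunchSimple(None, None, ".")
assert process.GetState() == lldb.eStateStopped

# Results come back in a structured form that is easy to validate.
frame = process.GetSelectedThread().GetFrameAtIndex(0)
assert frame.GetFunctionName() == "main"

lldb.SBDebugger.Destroy(debugger)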

We are also looking to see where in lldb it would make sense to add more testing of restricted components of lldb, either using the existing gtest framework or coming up with some other framework if that is not sufficient.

> I did have a look in buildbot to see what tests are being used, and I can only find the `dotest.py`-style tests; however, it's possible I've misunderstood something here. The "lldb-x86_64-ubuntu-14.04-cmake" bot's output is not easy to make sense of, I'm afraid.

I think the bots are supposed to run the gtests as well. For instance, in the output here:

http://lab.llvm.org:8080/green/view/LLDB/job/lldb_coverage_xcode/141/consoleFull#-15141855949844eead-46b0-4a97-ae91-923a3407a4e3

If you scan down you'll see:

+ /Users/buildslave/jenkins/workspace/lldb_coverage_xcode/build/lldb/build/Debug/lldb-gtest --gtest_output=xml:/Users/buildslave/jenkins/workspace/lldb_coverage_xcode/build/lldb/build/gtest-results.xml
[==========] Running 279 tests from 26 test cases.
[----------] Global test environment set-up.
[----------] 9 tests from GoParserTest
[ RUN      ] GoParserTest.ParseBasicLiterals
...

Those are the gtests running.

> 
> Also, there seems to be only one test for lldb-server in the LIT suite at present. Is there a reason for this, possibly along the lines that we're still waiting for the ability to run tests remotely using LIT, as per that email thread? I couldn't find an obvious answer as to whether a design was agreed upon for this and/or the work completed; maybe it's still an open question.

I haven't looked into the lldb-server tests much; Pavel would be the better person to answer that question.

> 
> Finally, I do see failures myself in both of these test suites on the latest build. I tend to limit my build to the X86 target only, and I suspect this may be related, or possibly it's just something odd with my build setup. Obviously in an ideal world these tests should always pass, but does anyone else see similar problems? I assume they tend to pass for the core developers, since ensuring that tests pass for new work seems to be fairly central to LLVM's culture. I can send the output of the failing tests if that would be useful.
> 

Since we depend on lots of different moving pieces, it is not uncommon for failures to come and go.  The clang & llvm folks don't gate their changes on a clean lldb test run, so this often needs to get cleaned up after the fact.  For instance, a recent change to how DWARF is emitted started causing failures in TestWithModuleDebugging.py; Adrian is in the process of fixing that.  There's also a test in the MI tool (TestMiVar.py) that has failed on and off for a while.  IIUC, Sean is going to x-fail that one, because it hasn't gotten any attention from the code owners of that area in a while.  Those are the only tests I've seen failing recently.  If you are seeing other tests fail, please file a PR with the llvm bugzilla (bugs.llvm.org) and we'll take a look.
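For reference, x-failing a test in the current suite is just a decorator on the test method.  Here's a minimal sketch using the standard lldbsuite decorators; the class name, test method, and bug number are made up for illustration:

from lldbsuite.test.decorators import expectedFailureAll
from lldbsuite.test.lldbtest import TestBase

class ExampleTestCase(TestBase):

    mydir = TestBase.compute_mydir(__file__)

    # Hypothetical bug number; marking the test expected-failure keeps
    # the bots green while the bug stays tracked in bugzilla.
    @expectedFailureAll(bugnumber="llvm.org/pr99999")
    def test_example(self):
        self.build()
        # ... exercise the behavior that currently fails ...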

Jim

> Many thanks for your time,
> 
> Steve