[Lldb-commits] [lldb] 3a6b60f - [lldb/Docs] Add some more info about the test suite structure
Jonas Devlieghere via lldb-commits
lldb-commits at lists.llvm.org
Thu Apr 16 10:18:29 PDT 2020
Author: Jonas Devlieghere
New Revision: 3a6b60fa623da6e311b61c812932917085067cd3
LOG: [lldb/Docs] Add some more info about the test suite structure
Expand on the structure of the LLDB test suite. So far this information
has been mostly "tribal knowledge". By writing it down I hope to make it
easier to understand our test suite for anyone that's new to the
diff --git a/lldb/docs/resources/test.rst b/lldb/docs/resources/test.rst
index dd40a1e51549..6f39a45d4b72 100644
@@ -1,27 +1,200 @@
+Test Suite Structure
+--------------------
The LLDB test suite consists of three different kinds of tests:
-* Unit test. These are located under ``lldb/unittests`` and are written in C++
- using googletest.
-* Integration tests that test the debugger through the SB API. These are
- located under ``lldb/packages/Python/lldbsuite`` and are written in Python
- using ``dotest`` (LLDB's custom testing framework on top of unittest2).
-* Integration tests that test the debugger through the command line. These are
- located under `lldb/test/Shell` and are written in a shell-style format
- using FileCheck to verify its output.
+* **Unit tests**: written in C++ using the googletest unit testing library.
+* **Shell tests**: Integration tests that test the debugger through the command
+ line. These tests interact with the debugger either through the command line
+ driver or through ``lldb-test``, a tool that exposes the internal
+ data structures in an easy-to-parse way for testing. Most people will know
+ these as *lit tests* in LLVM, although lit is the test driver and ShellTest
+ is the test format that uses ``RUN:`` lines. `FileCheck
+ <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
+ the output.
+* **API tests**: Integration tests that interact with the debugger through the
+ SB API. These are written in Python and use LLDB's ``dotest.py`` testing
+ framework on top of Python's `unittest2
+ <https://docs.python.org/2/library/unittest.html>`_.
+All three test suites use ``lit`` (`LLVM Integrated Tester
+<https://llvm.org/docs/CommandGuide/lit.html>`_) as the test driver. The test
+suites can be run as a whole or separately.
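+For example, in a CMake build driven by Ninja, each suite has its own check
+target (a sketch; target names reflect common in-tree conventions and may
+vary with your build configuration)::
+
+   $ ninja check-lldb-unit   # unit tests only
+   $ ninja check-lldb-shell  # shell tests only
+   $ ninja check-lldb-api    # API tests only
+   $ ninja check-lldb        # all three suites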
+Unit tests are located under ``lldb/unittests``. If it's possible to test
+something in isolation or as a single unit, you should make it a unit test.
+Often you need instances of the core objects, such as a debugger, target or
+process, in order to test something meaningful. We already have a handful of
+tests with the necessary boilerplate, but this is something we could abstract
+away to make it more user-friendly.
+Shell tests are located under ``lldb/test/Shell``. These tests are generally
+built around checking the output of ``lldb`` (the command line driver) or
+``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
+write because they require little boilerplate.
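+A minimal shell test could look something like this (a hypothetical sketch;
+the ``%clang_host`` and ``%lldb`` substitutions are provided by the lit
+configuration, and the CHECK pattern is made up for illustration)::
+
+   # RUN: %clang_host -g %s -o %t
+   # RUN: %lldb -b %t -o 'breakpoint set -n main' | FileCheck %s
+   # CHECK: Breakpoint 1:
+   int main() { return 0; }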
+``lldb-test`` is a relatively new addition to the test suite. It was the first
+tool that was added that is designed for testing. Since then it has been
+continuously extended with new subcommands, improving our test coverage. Among
+other things you can use it to query lldb for symbol files and object files.
+Obviously shell tests are great for testing the command line driver itself or
+the subcomponents already exposed by ``lldb-test``. But when it comes to LLDB's
+vast functionality, most things can be tested both through the driver as well
+as the Python API. For example, to test setting a breakpoint, you could do it
+from the command line driver with ``b main`` or you could use the SB API and do
+something like ``target.BreakpointCreateByName`` [#]_.
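+For illustration, the SB API version of that breakpoint could be written as
+follows (a sketch that assumes lldb's Python module is importable and that an
+``a.out`` binary exists)::
+
+   import lldb
+
+   debugger = lldb.SBDebugger.Create()
+   target = debugger.CreateTarget("a.out")
+   # Equivalent of "b main" in the command line driver.
+   breakpoint = target.BreakpointCreateByName("main")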
+A good rule of thumb is to prefer shell tests when what is being tested is
+relatively simple. Expressivity is limited compared to the API tests, which
+means that you have to have a well-defined test scenario that you can easily
+match with ``FileCheck``.
+Another thing to consider is the binaries being debugged, which we call
+inferiors. For shell tests, they have to be relatively simple. The
+``dotest.py`` test framework has extensive support for complex build scenarios
+and different variants, which is described in more detail below, while shell
+tests are limited to single lines of shell commands with compiler and linker
+flags.
+On the same topic, another interesting aspect of the shell tests is that there
+you can often get away with a broken or incomplete binary, whereas the API
+tests almost always require a fully functional executable. This enables testing
+of (some) aspects of handling binaries with non-native architectures or
+operating systems.
+Finally, the shell tests always run in batch mode. You start with some input
+and the test verifies the output. The debugger can be sensitive to its
+environment, such as the platform it runs on. It can be hard to express that
+the same test might behave slightly differently on macOS and Linux.
+Additionally, the debugger is an interactive tool, and the shell tests provide
+no good way of testing those interactive aspects, such as tab completion.
+API tests are located under ``lldb/test/API``. They are run with
+``dotest.py``. Tests are written in Python and test binaries (inferiors) are
+compiled with Make. The majority of API tests are end-to-end tests that compile
+programs from source, run them, and debug the processes.
+As mentioned before, ``dotest.py`` is LLDB's testing framework. The
+implementation is located under ``lldb/packages/Python/lldbsuite``. We have
+several extensions and custom test primitives on top of what's offered by
+`unittest2 <https://docs.python.org/2/library/unittest.html>`_. Those
+extensions can be found under ``lldb/packages/Python/lldbsuite`` as well.
+Below is the directory layout of an example API test.
+The test directory will always contain a Python file, starting with ``Test``.
+Most of the tests are structured as a binary being debugged, so there will be
+one or more source files and a ``Makefile``.
-All three test suites use the `LLVM Integrated Tester
-<https://llvm.org/docs/CommandGuide/lit.html>`_ (lit) as their test driver. The
-test suites can be run as a whole or separately.
-Many of the tests are accompanied by a C (C++, ObjC, etc.) source file. Each
-test first compiles the source file and then uses LLDB to debug the resulting
+ ├── Makefile
+ ├── TestSampleTest.py
+ └── main.c
+Let's start with the Python test file. Every test is its own class and can
+have one or more test methods, whose names start with ``test_``. Many tests
+define multiple test methods and share a bunch of common code. For example,
+for a fictitious test that makes sure we can set breakpoints, we might have
+one test method that ensures we can set a breakpoint by address, one that
+sets a breakpoint by name and another that sets the same breakpoint by file
+and line number. The setup, teardown and everything else other than setting
+the breakpoint could be shared.
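+A test along those lines might be structured as follows (a hypothetical
+sketch; the class name, method body and assertion are made up, and it assumes
+the ``lldbsuite`` framework is importable)::
+
+   from lldbsuite.test.lldbtest import TestBase
+
+   class TestBreakpoints(TestBase):
+       def test_set_breakpoint_by_name(self):
+           """Test setting a breakpoint by name."""
+           self.build()  # Build the inferior with the test's Makefile.
+           target = self.dbg.CreateTarget(self.getBuildArtifact("a.out"))
+           bkpt = target.BreakpointCreateByName("main")
+           self.assertTrue(bkpt.IsValid())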
+Our testing framework also has a bunch of utilities that abstract common
+operations, such as creating targets, setting breakpoints etc. When code is
+shared across tests, we extract it into a utility in ``lldbutil``. It's
+always worth taking a look at ``lldbutil`` to see if there's a utility to
+simplify some of the testing boilerplate.
+Because we can't always audit every existing test, this is doubly true when
+looking at an existing test for inspiration.
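+For example, the common build-launch-stop preamble can often be replaced by a
+single helper (a sketch; the breakpoint marker and file name are
+illustrative)::
+
+   import lldb
+   from lldbsuite.test import lldbutil
+
+   # Build the inferior, launch it, and stop at the source marker.
+   target, process, thread, bkpt = lldbutil.run_to_source_breakpoint(
+       self, "// break here", lldb.SBFileSpec("main.c"))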
+It's possible to skip or XFAIL tests using decorators. You'll see them a lot.
+The debugger can be sensitive to things like the architecture, the host and
+target platform, the compiler version, etc. LLDB comes with a range of
+predefined decorators for these purposes. ::
+
+   @expectedFailureAll(archs=["aarch64"], oslist=["linux"])
+Another great thing about these decorators is that they're very easy to
+extend; it's even possible to define a function in a test case that
+determines whether the test should be run or not.
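+For example, with the ``skipTestIfFn`` decorator (a sketch; the predicate and
+its condition are made up for illustration)::
+
+   import platform
+
+   from lldbsuite.test.decorators import skipTestIfFn
+
+   def reason_to_skip():
+       # Return a string explaining why to skip, or None to run the test.
+       if platform.system() != "Linux":
+           return "scenario only makes sense on Linux"
+       return None
+
+   @skipTestIfFn(reason_to_skip)
+   def test_something(self):
+       ...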
+In addition to providing a lot more flexibility when it comes to writing the
+test, the API tests also allow for much more complex scenarios when it comes
+to building inferiors. Every test has its own ``Makefile``, most of them only
+a few lines long. A shared ``Makefile`` (``Makefile.rules``) with about a
+thousand lines of rules takes care of most if not all of the boilerplate,
+while individual makefiles can be used to build more advanced tests.
+Here's an example of a simple ``Makefile`` used by the example test.
+ C_SOURCES := main.c
+ CFLAGS_EXTRAS := -std=c99
+ include Makefile.rules
+Finding the right variables to set can be tricky. You can always take a look
+at ``Makefile.rules``, but often it's easier to find an existing ``Makefile``
+that does something similar to what you want to do.
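+As a sketch, a test whose inferior consists of C++ sources plus a shared
+library might set variables like these (names follow ``Makefile.rules``
+conventions, but check an existing test before relying on them)::
+
+   CXX_SOURCES := main.cpp
+   DYLIB_CXX_SOURCES := lib.cpp
+   DYLIB_NAME := foo
+
+   include Makefile.rules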
+Another thing this enables is having different variants for the same test
+case. By default, we run every test for all three debug info formats: once
+with DWARF from the object files, once with gmodules, and finally with a dSYM
+on macOS or split DWARF (DWO) on Linux. But there are many more things we can
+test that are orthogonal to the test itself. On GreenDragon we have a matrix
+bot that runs the test suite under different configurations, with older host
+compilers and different DWARF versions.
+As you can imagine, this quickly leads to a combinatorial explosion in the
+number of variants. It's very tempting to add more variants because it's an
+easy way to increase test coverage, but it doesn't scale: it's easy to set
+up, yet it increases the runtime of the tests and has a large ongoing cost.
+The key takeaway is that the different variants don't obviate the need for
+focused tests. Relying on them to test, say, DWARF5 is a really bad idea.
+Instead you should write tests that check the specific DWARF5 feature, and
+treat the variant as a nice-to-have.
+In conclusion, you'll want to opt for an API test when you're testing the API
+itself or when you need the expressivity, either for the test case itself or
+for the program being debugged. The fact that the API tests work with
+variants means that more general tests should be API tests, so that they can
+be run against the different variants.
Running The Tests
@@ -244,4 +417,4 @@ A quick guide to getting started with PTVS is as follows:
--arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess
+.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_