[Lldb-commits] [PATCH] D78242: [lldb/Docs] Add some more info about the test suite layout

Jan Kratochvil via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Wed Apr 15 14:55:25 PDT 2020


jankratochvil added inline comments.


================
Comment at: lldb/docs/resources/test.rst:18
+  as *lit tests* in LLVM, although lit is the test driver and ShellTest is the
+  test format that uses ``RUN:`` lines. ``FileCheck`` is used to verify the
+  output.
----------------
https://llvm.org/docs/CommandGuide/FileCheck.html


================
Comment at: lldb/docs/resources/test.rst:21
+* **API tests**: Integration tests that interact with the debugger through the
+  SB API. These are are written in Python and use LLDB's ``dotest.py`` testing
+  framework on top of Python's unittest2.
----------------
s/are are/are/


================
Comment at: lldb/docs/resources/test.rst:22
+  SB API. These are are written in Python and use LLDB's ``dotest.py`` testing
+  framework on top of Python's unittest2.
+
----------------
https://docs.python.org/2/library/unittest.html
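
As a possible illustration for the docs, a minimal API test might look roughly like this (a sketch only; the class and method names are made up, while `TestBase`, `compute_mydir`, `self.build()` and `self.dbg` are part of the existing framework):

```
from lldbsuite.test.lldbtest import TestBase


class ExampleAPITestCase(TestBase):
    # Boilerplate every dotest.py test case carries.
    mydir = TestBase.compute_mydir(__file__)

    def test_create_target(self):
        """Build the inferior and check that a target can be created for it."""
        self.build()
        target = self.dbg.CreateTarget(self.getBuildArtifact("a.out"))
        self.assertTrue(target.IsValid())
```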


================
Comment at: lldb/docs/resources/test.rst:43
+
+Shell tests are located under ``lldb/test/shell``. These tests are generally
+build around checking the output of ``lldb`` (the command line driver) or
----------------
s#lldb/test/shell#lldb/test/Shell#


================
Comment at: lldb/docs/resources/test.rst:44
+Shell tests are located under ``lldb/test/shell``. These tests are generally
+build around checking the output of ``lldb`` (the command line driver) or
+``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
----------------
s/build/built/


================
Comment at: lldb/docs/resources/test.rst:45
+build around checking the output of ``lldb`` (the command line driver) or
+``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
+write because they require little boilerplate.
----------------
https://llvm.org/docs/CommandGuide/FileCheck.html


================
Comment at: lldb/docs/resources/test.rst:58
+as the Python API. For example, to test setting a breakpoint, you could do it
+from the command line driver with ``b main``` or you could use the SB API and do
+something like ``target.BreakpointCreateByName``.
----------------
s/```/``/ (drop the stray third backtick after ``b main``)


================
Comment at: lldb/docs/resources/test.rst:59
+from the command line driver with ``b main``` or you could use the SB API and do
+something like ``target.BreakpointCreateByName``.
+
----------------
https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName
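
To make the two routes concrete, a rough sketch of a single test method exercising both (illustrative only; `runCmd`, `dbg` and `getBuildArtifact` come from `TestBase`):

```
# Inside a TestBase-derived test case.
def test_breakpoint_two_ways(self):
    self.build()
    target = self.dbg.CreateTarget(self.getBuildArtifact("a.out"))

    # 1. Through the command line driver, exactly as a user would type it.
    self.runCmd("b main")

    # 2. Through the SB API on the SBTarget.
    bkpt = target.BreakpointCreateByName("main")
    self.assertGreater(bkpt.GetNumLocations(), 0)
```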


================
Comment at: lldb/docs/resources/test.rst:97
+  ├── TestSampleTest.py
+  └── main.c
+
----------------
aprantl wrote:
> mention that there is an actual, literal, example test, too?
The easiest would be to simply link it: https://github.com/llvm/llvm-project/tree/master/lldb/test/API/sample_test


================
Comment at: lldb/docs/resources/test.rst:102
+multiple test methods and share a bunch of common code. For example, for a
+fictive tests that makes sure we can set breakpoints we might have one test
+method that ensures we can set a breakpoint by address, on that sets a
----------------
s/fictive tests/fictive test/
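
As an aside, such a fictive test class might look roughly like this (a sketch; the class, helper and method names are made up, the `TestBase` plumbing is real):

```
from lldbsuite.test.lldbtest import TestBase


class BreakpointVariantsTestCase(TestBase):
    mydir = TestBase.compute_mydir(__file__)

    def make_target(self):
        """Shared setup used by every test method in this class."""
        self.build()
        return self.dbg.CreateTarget(self.getBuildArtifact("a.out"))

    def test_breakpoint_by_name(self):
        target = self.make_target()
        self.assertTrue(target.BreakpointCreateByName("main").IsValid())

    def test_breakpoint_by_address(self):
        target = self.make_target()
        target.BreakpointCreateByAddress(0x1000)
        self.assertEqual(target.GetNumBreakpoints(), 1)
```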


================
Comment at: lldb/docs/resources/test.rst:110
+operations, such as creating targets, setting breakpoints etc. When code is
+shared across tests, we extract it into a utility in ``lldbutil``. It's always
+worth taking a look at  lldbutil to see if there's a utility to simplify some
----------------
There are multiple `lldbutil*` files, so it might help to add a link: https://github.com/llvm/llvm-project/blob/master/lldb/packages/Python/lldbsuite/test/lldbutil.py
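
For example, `run_to_source_breakpoint` is one helper from there that hides most of the target/breakpoint/process boilerplate (a sketch; it assumes the inferior's `main.c` contains a `// break here` comment):

```
import lldb
from lldbsuite.test import lldbutil

# Inside a TestBase-derived test method:
def test_stop_at_source_breakpoint(self):
    self.build()
    target, process, thread, bkpt = lldbutil.run_to_source_breakpoint(
        self, "// break here", lldb.SBFileSpec("main.c"))
    self.assertEqual(process.GetState(), lldb.eStateStopped)
```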


================
Comment at: lldb/docs/resources/test.rst:115
+
+It's possible to skip or XFAIL tests using decorators. You'll see them a lot.
+The debugger can be sensitive to things like the architecture, the host and
----------------
An example: `@expectedFailureAll(archs=["aarch64"], oslist=["linux"])`
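
It might also be worth showing how such a decorator is attached to a test method (a sketch; `expectedFailureAll` and `skipIfWindows` both live in `lldbsuite.test.decorators`, the test itself is made up):

```
from lldbsuite.test.decorators import expectedFailureAll, skipIfWindows
from lldbsuite.test.lldbtest import TestBase


class DecoratedTestCase(TestBase):
    mydir = TestBase.compute_mydir(__file__)

    @skipIfWindows
    @expectedFailureAll(archs=["aarch64"], oslist=["linux"])
    def test_platform_sensitive_behavior(self):
        # Skipped on Windows, expected to fail on AArch64 Linux,
        # runs normally everywhere else.
        self.build()
```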


================
Comment at: lldb/docs/resources/test.rst:120
+decorators is that they're very easy to extend, it's even possible to define a
+function in a test case that determines whether the test should be run or not.
+
----------------
Probably: `@expectedFailure(checking_function_name)`
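
Along those lines, a predicate-driven decorator could be sketched like this (hedged: the predicate and test are made up; `skipTestIfFn` is the skip-side decorator I believe exists in `lldbsuite.test.decorators`, and the XFAIL form would be whichever spelling the docs settle on):

```
import os

from lldbsuite.test.decorators import skipTestIfFn
from lldbsuite.test.lldbtest import TestBase


def missing_fancy_feature():
    """Return a reason string to skip the test, or None to run it."""
    # Hypothetical check; real predicates typically inspect the target,
    # the compiler or the environment.
    if "FANCY_FEATURE" not in os.environ:
        return "fancy feature not available"
    return None


class PredicateDecoratedTestCase(TestBase):
    mydir = TestBase.compute_mydir(__file__)

    @skipTestIfFn(missing_fancy_feature)
    def test_fancy_feature(self):
        self.build()
```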


================
Comment at: lldb/docs/resources/test.rst:124
+test, the API test also allow for much more complex scenarios when it comes to
+building inferiors. Every tests has its own Makefile, most of them only a few
+lines long. A shared Makefile (``Makefile.rules``) with about a thousand lines
----------------
s/tests/test/


================
Comment at: lldb/docs/resources/test.rst:129
+
+Another things this enables is having different variants for the same test
+case. By default, we run every test for all 3 debug info formats, so once with
----------------
s/things/thing/


================
Comment at: lldb/docs/resources/test.rst:149
+In conclusion, you'll want to opt for an API test to test the API itself or
+when you need the expressively, either for the test case itself or for the
+program being debugged. The fact that the API tests work with different
----------------
expressivity? (i.e. s/expressively/expressivity/)


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D78242/new/

https://reviews.llvm.org/D78242




