[llvm-dev] Running debuginfo-tests with Dexter

Jeremy Morse via llvm-dev <llvm-dev at lists.llvm.org>
Wed Jun 19 08:27:03 PDT 2019


Hi llvm-dev@,

There's been some renewed interest in integration tests for debuginfo,
checking that debuggers interpret LLVM's output in the way we expect.
Paul suggested a while back [0] that our (Sony's) Dexter [1] tool
could be a useful driver for running debuginfo tests -- I'd like to
ask whether people think this would be a desirable course to take, and
what the requirements for using Dexter in these integration tests
would be.

Background: the plan with Dexter was to try to quantify the quality
of the debugging experience a developer receives when debugging their
program. That allows integration testing between LLVM and debuggers,
and, coupled with a test-suite, a measurement of "how good" a
particular compiler is at preserving debug info. A full summary is
best found in Greg's 5-minute lightning talk [2,3] on the topic.
Dexter's significant parts are its abstraction of debugger APIs, a
language for describing expected debug behaviour, and scoring of "how
bad" divergences from the expected behaviour are.

Some examples of Dexter tests can be found at [4], where we wrote
various tests to measure how much debuginfo was destroyed by different
LLVM passes.
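
To give a flavour, a Dexter test is an ordinary source file with the
expected behaviour written in comments -- roughly like the following
(the command names and syntax here are illustrative and may not match
the current repo exactly):

    int foo(int x) {
      int y = x + 1;   // DexExpectWatchValue('x', '3', on_line=2)
      return y * 2;    // DexExpectWatchValue('y', '4', on_line=3)
    }

    int main() {
      return foo(3);
    }

Dexter builds and runs this under a debugger, steps through it
recording what was actually visible at each line, and scores any
divergence from the stated expectations.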

As far as I understand it, the existing debuginfo-tests [5] contain
debugger commands that are fed into a debugger, and the debugger
output is FileCheck'd. This works directly for gdb, and there's a thin
layer (llgdb.py) for driving lldb, but Windows-based cdb has a very
different input language and has its own set of tests. An obvious win
would be unifying these, which is something Dexter could be adapted to
do. I'm sure most would agree that it's better to declare the expected
behaviour in some language and have separate scripting compare it with
the real behaviour than to embed debugger-specific interrogation
commands and output-matching directly in the tests.
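
For comparison, the existing tests look roughly like this (sketched
for illustration rather than copied from the tree): the debugger
script and the FileCheck patterns are interleaved as comments in the
source being debugged:

    // DEBUGGER: break 8
    // DEBUGGER: r
    // DEBUGGER: p x
    // CHECK: = 3

    int main() {
      int x = 3;
      return x;
    }

The DEBUGGER: lines are fed to gdb (or translated for lldb by
llgdb.py) and the output is FileCheck'd against the CHECK: lines, so
the patterns are tied to a particular debugger's output format; cdb
needs a different script and different patterns to test the same
behaviour.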

We can easily specialise Dexter to consider any divergence from
expected behaviour to be an error, giving us a pass/fail test tool.
Some existing tests examine types, which Dexter doesn't currently do
(though we're working on it). What other objectives would there be for
a debugger integration tool? There was mention in [0] of tests for
Microsoft-specific extensions (I assume extended debugging
facilities); knowing the scope of the extra information involved would
help us design around it.
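
For type checks, one option would be a declarative command in the same
style -- something along these lines, where DexExpectWatchType is an
entirely made-up name, just a sketch of what such an expectation might
look like:

    struct S { int a; float b; };

    int main() {
      struct S s = {1, 2.0f};
      // NB: hypothetical command, nothing like this exists yet.
      return s.a;   // DexExpectWatchType('s', 'struct S', on_line=6)
    }

The point being that the test states what the debugger should report,
and the driver layer worries about how each debugger expresses it.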

Note that the current Dexter codebase is going to be significantly
remangled; we're trying to decouple the expected-behaviour language
from the debugger abstraction's summary of how the program behaved.

[0] https://reviews.llvm.org/D54187#1290282
[1] https://github.com/SNSystems/dexter
[2] https://www.youtube.com/watch?v=XRT_GmpGjXE
[3] https://llvm.org/devmtg/2018-04/slides/Bedwell-Measuring_the_User_Debugging_Experience.pdf
[4] https://github.com/jmorse/dexter/tree/f46f13f778484ed5c6f7bf33b8fc2d4837ff7265/tests/nostdlib/llvm_passes
[5] https://github.com/llvm/llvm-project/tree/master/debuginfo-tests

--
Thanks,
Jeremy

