[llvm-dev] Running debuginfo-tests with Dexter
Reid Kleckner via llvm-dev
llvm-dev at lists.llvm.org
Wed Jun 19 11:05:18 PDT 2019
I'd be in favor of using Dexter to write better debug info quality tests.
As far as goals and requirements go, you've already identified the ability
to drive various debuggers (gdb, lldb, VS, cdb).
Regarding testing Microsoft extensions, we've always had ways to have
platform-specific tests, and I think we can easily extend them.
My initial thought is to set this up as a dexter/ subdirectory of
debuginfo-tests, and add a lit.local.cfg that enables the tests if dexter
is available. Inside there, we can have generic tests, and then Windows,
Mac, Linux, POSIX, etc., similar to what we do for asan tests today. You
might want to set things up in CMake to support running the tests in
multiple configurations, perhaps one per debugger, so you could run the
tests with gdb and lldb if you have both installed. The asan test suite
does this for static+dynamic linking.
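To make the availability check concrete, here is a minimal sketch of what such a lit.local.cfg could look like. This is illustrative only: the `config` stand-in below is normally injected by lit itself, and probing PATH with `shutil.which` is just one way to detect dexter (a CMake-configured substitution would work too).

```python
import shutil
import types

# Stand-in for the `config` object that lit injects into lit.local.cfg;
# in a real config file it already exists, so this line would be dropped.
config = types.SimpleNamespace(unsupported=False, available_features=set())

dexter_path = shutil.which("dexter")  # None when dexter is not on PATH

if dexter_path is not None:
    # Tests in this directory may then use `REQUIRES: dexter`.
    config.available_features.add("dexter")
else:
    # lit reports every test under this directory as Unsupported.
    config.unsupported = True
```

Running the same directory once per debugger would then just be a matter of generating one such configuration per discovered debugger, much as the asan suite generates static and dynamic configurations.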
On Wed, Jun 19, 2019 at 8:27 AM Jeremy Morse via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Hi llvm-dev@,
> There's been some renewed interest in integration tests for debuginfo,
> checking that debuggers interpret LLVM's output in the way we expect.
> Paul suggested a while back that our (Sony's) Dexter tool
> could be a useful driver for running debuginfo tests -- I'd like to
> ask whether people think this would be a desirable course to take, and
> what the requirements for using Dexter in these integration tests
> would be.
> Background: the plan with Dexter was to try and quantify the quality
> of debugging experience that a developer received when debugging their
> program. That allows integration testing between LLVM and debuggers,
> and, coupled with a test-suite, a measurement of "how good" a particular
> compiler is at preserving debug info. A full summary is best found in
> Greg's 5-minute lightning talk [2,3] on the topic. Dexter's
> significant parts are its abstraction of debugger APIs, a language to
> describe expected debug behaviour, and scoring of "how bad"
> divergences from the expected debug behaviour are.
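The scoring idea can be sketched as follows. This is a deliberately simplified illustration, not Dexter's actual algorithm or API: the sequence of values a debugger reported for a watched variable is compared step by step against the values the test expected, and each divergence accumulates a penalty.

```python
def score_watch(expected, observed, miss_penalty=1.0):
    """Toy scoring: 0.0 means the debugger showed exactly the expected
    values; higher means a worse debug experience.  (Illustrative only --
    Dexter's real scoring is more involved.)"""
    penalty = 0.0
    for step, want in enumerate(expected):
        got = observed[step] if step < len(observed) else None
        if got != want:
            # e.g. a value shown as "optimized out", or never seen at all
            penalty += miss_penalty
    return penalty

# A perfect run of a loop counter scores 0.0 ...
assert score_watch(["0", "1", "2"], ["0", "1", "2"]) == 0.0
# ... while a value optimised away at one step scores worse.
assert score_watch(["0", "1", "2"], ["0", "<optimized out>", "2"]) == 1.0
```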
> Some examples of Dexter tests can be found in the Dexter repository, where we wrote
> various tests to measure how much debuginfo was destroyed by different
> LLVM passes.
> As far as I understand it, the existing debuginfo-tests contain
> debugger commands that are fed into a debugger, and the debugger
> output is FileCheck'd. This works directly for gdb, and there's a thin
> layer (llgdb.py) for driving lldb, but windows-based cdb has a very
> different input language and has its own set of tests. An obvious win
> would be unifying these, which is something Dexter could be adapted to
> do. I'm sure most would agree that it would be better to declare the expected
> behaviour in some language and have other scripting compare it with
> the real behaviour, than to put highly coupled-to-the-debugger
> interrogation commands and output examination in the tests.
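The contrast can be sketched like this. All names below are hypothetical (Dexter's real commands, such as DexExpectWatchValue, are written as comments inside the test source): one declarative expectation is lowered to the imperative, debugger-specific command strings a FileCheck-style test would otherwise hard-code, with the cdb spelling in particular being a rough approximation.

```python
from dataclasses import dataclass

@dataclass
class ExpectWatchValue:
    """Declarative expectation: variable `name` should take these values,
    in order, each time line `on_line` is reached.  (Hypothetical type.)"""
    name: str
    values: tuple
    on_line: int

def lower_to_commands(exp, debugger):
    """Translate one expectation into per-debugger command strings --
    exactly the coupling a declarative test language avoids."""
    if debugger == "gdb":
        return [f"break {exp.on_line}", f"print {exp.name}"]
    if debugger == "lldb":
        return [f"breakpoint set --line {exp.on_line}",
                f"expression {exp.name}"]
    if debugger == "cdb":  # rough approximation of cdb syntax
        return [f"bp `:{exp.on_line}`", f"?? {exp.name}"]
    raise ValueError(f"unknown debugger: {debugger}")

exp = ExpectWatchValue("i", ("0", "1", "2"), on_line=12)
# One expectation, three very different command languages:
assert lower_to_commands(exp, "gdb") == ["break 12", "print i"]
```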
> We can easily specialise Dexter to consider any divergence from
> expected behaviour to be an error, giving us a pass/fail test tool.
> Some existing tests examine types, which Dexter doesn't currently do
> (but we're working on). What other objectives would there be for a
> debugger integration tool? There was mention in  of tests for
> Microsoft specific extensions (I assume extended debugging
> facilities), knowing the scope of extra information involved would
> help us design around it.
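Specialising a quality scorer into a pass/fail gate is then straightforward. A sketch, again with hypothetical names rather than Dexter's actual interface: under lit we don't want a graded score, just a zero exit status when the observed behaviour matched expectations exactly.

```python
def run_as_lit_test(divergences):
    """Pass/fail specialisation: `divergences` is the list of mismatches
    a comparison against expected behaviour produced.  (Hypothetical
    helper -- not Dexter's actual interface.)"""
    if divergences:
        for d in divergences:
            print(f"FAIL: {d}")
        return 1   # non-zero exit status -> lit marks the test as failing
    print("PASS")
    return 0

assert run_as_lit_test([]) == 0
assert run_as_lit_test(["'i' was <optimized out> at step 1"]) == 1
```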
> Note that the current Dexter codebase is going to be significantly
> remangled, we're trying to decouple the expected-behaviour language
> from the debugger abstraction's summary of how the program behaved.
>  https://reviews.llvm.org/D54187#1290282
>  https://github.com/SNSystems/dexter
>  https://www.youtube.com/watch?v=XRT_GmpGjXE
>  https://github.com/llvm/llvm-project/tree/master/debuginfo-tests