[lldb-dev] LLDB test questions

Todd Fiala via lldb-dev lldb-dev at lists.llvm.org
Fri Jan 22 09:55:51 PST 2016


Hi Ted!

I hope you don't mind - I'm going to CC lldb-dev since there is some useful
general info in here for others who are getting started with the test
system.  (And others can fact-check anything I may gloss over here).

On Thu, Jan 21, 2016 at 2:00 PM, Ted Woodward <ted.woodward at codeaurora.org>
wrote:

> Hi Todd,
>
>
>
> I’m working on getting the LLDB tests running with Hexagon, but I’m
> confused about some things. Here are my initial results:
>
> ===================
>
> Test Result Summary
>
> ===================
>
> Test Methods:        967
>
> Reruns:                2
>
> Success:             290
>
> Expected Failure:     25
>
> Failure:              89
>
> Error:               111
>
> Exceptional Exit:     13
>
> Unexpected Success:    2
>
> Skip:                434
>
> Timeout:               3
>
> Expected Timeout:      0
>
>
>
>
>
> First question – How can I tell what certain tests are doing?
>

There are two places you can look for more information.  Of the categories
counted above, the following should have their specific failing tests
listed in a section called something like "Test Details", directly above
the "Test Result Summary" section:
* Failure
* Error
* Exceptional Exit
* Timeout

Those will tell you the test method names (and file paths relative to the
packages/Python/lldbsuite/test directory).  You should also get a stack
trace above that section, printed while the runner counts off the tests as
they run.

But, that's not really the heavy detail info.  The heavy details are in a
"test session directory", which by default is created in the current
directory when the test suite is kicked off, and has a name something like:

2016-01-21-15_14_26/

This is a date/time-encoded directory with a bunch of files in it.  For
each of the classes of failure above, you should have a file that begins
with something like:

"Failure-"
"Error-"

(i.e. the status as the first part), followed by the test
package/class/name/architecture, and ending in .log.  That file records
the build commands and any I/O from the process.  It is the best place to
look when a test goes wrong.

Here is an example failure filename from a test suite run I did on OS X
that failed recently:

Failure-TestPublicAPIHeaders.SBDirCheckerCase.test_sb_api_directory_dsym-x86_64-clang.log



> For example, TestExprPathSynthetic, from
> packages/Python/lldbsuite/test/python_api/exprpath_synthetic/TestExprPathSynthetic.py
> .
>
>
>
> TestExprPathSynthetic.py has:
>
> import lldbsuite.test.lldbinline as lldbinline
>
> import lldbsuite.test.lldbtest as lldbtest
>
>
>
> lldbinline.MakeInlineTest(__file__, globals(),
> [lldbtest.skipIfFreeBSD,lldbtest.
>
> skipIfLinux,lldbtest.skipIfWindows])
>
>
>
> I’m going to want to add a skip in there for Hexagon, but what does this
> test actually do?
>

I haven't worked with that test directly, but in general, the
MakeInlineTest tests are used to generate the Python side of the test run
logic, and they assume there is a main.c/main.cpp/main.mm file in the
directory (as there is in that one).  The main.* file contains comments
with executable expressions in them that hold everything needed to drive
the test, using the compiled main.* file as the test inferior subject.

This particular test looks like it is attempting to test synthetic children
in expression parsing for objective C++.  This one probably should say
something like "skipUnlessDarwin" rather than manually adding all the other
platforms that should skip. (Objective-C++ and Cocoa tests should only run
on Darwin).
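If it really is Darwin-only, the change would look something like the sketch below (same MakeInlineTest call quoted above, swapping the per-platform skips for the existing skipUnlessDarwin decorator; I haven't run this against that test):

```python
import lldbsuite.test.lldbinline as lldbinline
import lldbsuite.test.lldbtest as lldbtest

# Skip everywhere except Darwin instead of listing each non-Darwin
# platform by hand.
lldbinline.MakeInlineTest(__file__, globals(),
                          [lldbtest.skipUnlessDarwin])
```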


>
>
>
>
>
> Second question – a lot of tests are skipped. Are the skipped tests always
> skipped because of something like @benchmarks_test being false, or
> @skipIfFreeBSD being true?
>
>
>
>
>

Skipped tests are any tests marked with @skipIfXYZ, @unittest2.skip or
the like.  Skips happen for a ton of reasons.  Most of our tests now get
turned automagically into 3 or so tests - one for each type of debuginfo
that a test inferior subject can be compiled as.  Those are:
* dsym-style debuginfo, only available on OS X
* dwarf (in-object-file dwarf, all platforms generally have this)
* dwo (only on Linux right now I think)

So each test method defined typically has three variants run, one created
for each debuginfo type.  On any platform, only two (at most) typically
run, the rest being listed as skipped.  A large number of skips will be due
to that.  On non-Darwin platforms, a larger number will be skipped because
they are OS X-specific, like Objective-C/Objective-C++.

That test session directory will show you all the skipped ones.  They start
with "SkippedTest-".
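For what it's worth, the mechanics of those conditional skips are just decorators.  Here's a plain-unittest sketch of the idea (skipIfHexagon is a hypothetical name, and this checks the host machine, whereas lldbsuite's real decorators check the target platform):

```python
import platform
import unittest

def skipIfHexagon(func):
    """Hypothetical decorator: skip when the host reports a hexagon arch.
    (lldbsuite's real @skipIf* decorators key off the target platform.)"""
    return unittest.skipIf(platform.machine().lower().startswith("hexagon"),
                           "test not supported on hexagon")(func)

class ExampleTest(unittest.TestCase):
    @skipIfHexagon
    def test_runs_elsewhere(self):
        # Runs (and passes) on any non-hexagon host.
        self.assertTrue(True)
```

So adding a Hexagon skip for a given test should mostly be a matter of finding (or adding) the right decorator and stacking it on the test method or the MakeInlineTest decorator list.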


>
> Third question – I see things like this:
>
>         self.build(dictionary=self.getBuildFlags())
>
> (from
> packages/Python/lldbsuite/test/functionalities/thread/step_out/TestThreadStepOut.py
> )
>
> How do I see what the build flags are? How does it know which file to
> build?
>
>
>

The test session directory will have a separate log for each test method
that is run.  It will show the build commands that were run.  In your case,
I'd expect you have --platform-name set to "hexagon" or
"hexagon-simulator" or something like that.  You'd want the build system to
know about that platform, and "do the right thing" when building test
inferior programs for it.  The test runner also supports -C to specify
the compiler to use and -A to specify the architecture(s) to run.  But I
think you want to rig things so that --platform-name {your-platform} works
the way you need.

The build rules typically come from the Makefile in the test directory.  If
you take a few of them and look, you'll see they include a set of rules
from packages/Python/lldbsuite/test/make/Makefile.rules.  I'd expect you'd
end up adding info in there for doing the right thing for the hexagon
platforms.
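For reference, a per-test Makefile is usually just a couple of lines that name the sources and pull in the shared rules, roughly like this (a sketch; the exact variable names in any given test directory may differ):

```makefile
# Typical per-test Makefile: name the inferior's sources, then
# delegate everything else to the shared Makefile.rules.
LEVEL = ../../make
CXX_SOURCES := main.cpp
include $(LEVEL)/Makefile.rules
```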


>
>
>
> Fourth question – IntegerTypesTestCase.test_unsigned_short_type_dwarf
>
> I don’t see test_unsigned_short_type_dwarf, just test_unsigned_short_type
>
> What does
>
>     def test_unsigned_short_type(self):
>
>         """Test that 'unsigned_short'-type variables are displayed
> correctly."""
>
>         self.build_and_run('unsigned_short.cpp', set(['unsigned',
> 'short']))
>
> actually do?
>
>
>

That class derives from something called "GenericTester".  That class
likely defines build_and_run(), which appears to check the output printed
for any given type by the test inferior, with basic_type.cpp acting as the
"test inferior driver" (i.e. the thing that prints all the type contents).
It looks like a templated style of test that runs against a bunch of
different concrete types.  The AbstractBase.py class in that directory
(packages/Python/lldbsuite/test/types) has some documentation that
explains how it works.
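The general shape of that "templated" pattern is easy to see in plain Python (this is only an illustration of the idea, not the actual GenericTester code):

```python
import unittest

def make_type_test(value, expected_repr):
    """Build one test method that checks how a value is displayed."""
    def test(self):
        self.assertEqual(repr(value), expected_repr)
    return test

class TypeDisplayTests(unittest.TestCase):
    pass

# One generated test method per concrete type, mirroring how the
# integer-type tests stamp a single template out over many C types.
for type_name, value, expected in [("int", 42, "42"),
                                   ("float", 1.5, "1.5"),
                                   ("str", "a", "'a'")]:
    setattr(TypeDisplayTests, "test_%s_type" % type_name,
            make_type_test(value, expected))
```

That's why you see method names like test_unsigned_short_type even though no such method is written out by hand anywhere.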

I hope that gets you going!


>
>
>
>
> That’s it for now…before my brain melts! J
>
>
>

:-)


> Thanks,
>
>
>
> Ted
>
>
>
>
My pleasure!

-Todd


>
>
>
>
>
>
> --
>
> Qualcomm Innovation Center, Inc.
>
> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a
> Linux Foundation Collaborative Project
>
>
>



-- 
-Todd