[lldb-dev] lldb python test run on 64-bit Fedora Core 20

Todd Fiala tfiala at google.com
Tue Jul 22 07:44:11 PDT 2014


On Tue, Jul 22, 2014 at 5:01 AM, Matthew Gardiner <mg11 at csr.com> wrote:

> Folks,
>
> I've managed to tweak things (i.e. get the llvm/lldb .so files and the
> python site-packages under build/lib) so that I can run dotest.py as
> follows:
>
>
Great!


> $ /mydir/llvm/tools/lldb/test
>
> LD_LIBRARY_PATH=/mydir/build/lib/ \
>   PYTHONPATH=/mydir/build/lib/python2.7/site-packages/ \
>   python dotest.py --executable=/mydir/build/bin/lldb -v -l --compiler=gcc -q .
>
>
Okay, glad you found a command line that worked.  I'll get an FC VM up and
work on a fix for that environment.


> I note that the results say:
>
> Ran 1083 tests in 633.125s
>
>
Reasonable - how many cores are you using?  (This was a VM, right?)


> FAILED (failures=1, skipped=619, expected failures=54, unexpected
> successes=15)
>
> Since I've only just managed to get the tests working, are the above
> results reasonable?
>
>
Yes, that's reasonable for Linux.  The skipped tests are generally
Darwin/MacOSX-only tests; the suite has nearly two tests in total for every
one that can run on Linux.  Most of the others are variants that exercise a
debuginfo packaging format only available on MacOSX.
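
For a sense of how a single skip feeds into that count, here is a minimal
sketch using the stock unittest decorators.  The real LLDB suite gates tests
with its own helpers in lldbtest.py, so the decorator and test names below
are illustrative only, not the ones the suite actually uses:

# Sketch: how a platform-gated test ends up in the "skipped" column.
import sys
import unittest


class DarwinOnlyTestCase(unittest.TestCase):

    @unittest.skipUnless(sys.platform == "darwin",
                         "exercises a Darwin/MacOSX-only feature")
    def test_darwin_only_feature(self):
        # On Linux this body never runs; the runner just records a skip.
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main(verbosity=2)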

The expected failures represent tests that we don't have working on Linux
(often on FreeBSD as well); they are code and/or test bugs that need to be
addressed.  (If you ever feel like doing some LLDB spelunking, these are
great learning opportunities to pick up!)


> That is are expected failures=54, unexpected successes=15 ok?
>
>
The unexpected successes represent one of two things:

1. tests marked XFAIL that are intermittent, and so sometimes pass, falling
into this bucket.  This is the best we can do with these for now, until we
get rid of the intermittent nature of the test.  Note that the multi-core
test running the build systems do stresses the tests more heavily than
running them individually.

2. tests marked XFAIL that now always pass, and so should no longer be
marked XFAIL.  The majority do not fall into this category, but it is a
state that can occur once we fix the underlying race and/or timing issue
that made the test intermittent in the first place.  (There's a small
sketch of how this gets reported just below.)
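
If it helps to see the reporting mechanism, here is a minimal sketch using
the stock unittest module.  The real suite layers its own XFAIL decorators
on top of unittest2, so treat the decorator below as an illustration rather
than what the LLDB tests actually use:

# Sketch: how "expected failure" vs. "unexpected success" get counted.
import unittest


class XfailReportingExample(unittest.TestCase):

    @unittest.expectedFailure
    def test_still_broken(self):
        # Fails as predicted, so the runner counts an "expected failure".
        self.assertEqual(1 + 1, 3)

    @unittest.expectedFailure
    def test_already_fixed(self):
        # Passes despite being marked expected-to-fail, so the runner
        # counts an "unexpected success" (bucket 2 above, or bucket 1 if
        # the pass is intermittent).
        self.assertEqual(1 + 1, 2)


if __name__ == "__main__":
    unittest.main(verbosity=2)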


> The only actual failure I saw was:
>
> FAIL: test_stdcxx_disasm (TestStdCXXDisassembly.StdCXXDisassembleTestCase)
>       Do 'disassemble' on each and every 'Code' symbol entry from the std
> c++ lib.
>
>
This is really the nugget of result your test run is showing.  I'm not
entirely sure why that one is failing.  It could be a legitimate failure
caused by changes in your code, or it could be something that surfaces on
FC 20 but not elsewhere.  The test run should have created a directory
called "lldb-test-traces".  It ends up in different places depending on
ninja vs. make builds: for ninja builds it will be in your root build dir,
and for make builds it will be in the {my-build-dir}/tools/lldb/test dir.
In that directory you get a trace log file for every test that did not
succeed - either because it was skipped, it failed (i.e. a test assertion
failed), it had an error (i.e. it failed but not because of an assert -
something entirely unexpected happened, like an i/o issue, seg fault,
etc.), or it unexpectedly passed (marked XFAIL but succeeded).  So you
should have a file called something like "Failed*test_stdcxx_disasm*.log"
in that directory.  You can look at its contents to see what failed.
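
A quick way to pull that log up is something like the sketch below.  The
trace directory path is an assumption based on a ninja build rooted at
/mydir/build, so adjust it for your layout (for a make build it would be
under {my-build-dir}/tools/lldb/test):

# Print the trace log(s) for the failing disassembly test.
# Assumption: ninja build, so the traces live under the root build dir.
import glob
import os

trace_dir = "/mydir/build/lldb-test-traces"

for log_path in glob.glob(os.path.join(trace_dir,
                                       "Failed*test_stdcxx_disasm*.log")):
    print("==== %s ====" % log_path)
    with open(log_path) as log_file:
        print(log_file.read())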


> I guess as I run the tests more often, I'll get more of a feel for it, but
> I just wondered if the above was a reasonable baseline.
>
>
Generally the tests are in a state where a failure represents a real issue.
I've spent quite a bit of time getting the test suite into that state, so
that a failure means a real problem.  In your case, it could be an FC
environment issue where that test - for that environment - is just never
going to pass.  In that case we need to either fix it or annotate it as a
known issue and file a bug for it.  For your particular case, the way to
figure that out is to do a build and a test run against a clean-slate
top-of-tree sync (essentially shelve any changes you have locally) and see
what a clean-slate test run produces.  If you always see that error, it's a
tip-off that the test is broken in your environment.


> All tips/feedback welcome,
> Matt
>
>
Happy testing!

-Todd





-- 
Todd Fiala | Software Engineer | tfiala at google.com | 650-943-3180

