[zorg] r235045 - [zorg] Fix LitTestCommand unexpected test result when TIMEOUT is returned.

Rick Foos rfoos at codeaurora.org
Wed Apr 15 14:42:37 PDT 2015


Author: rfoos
Date: Wed Apr 15 16:42:36 2015
New Revision: 235045

URL: http://llvm.org/viewvc/llvm-project?rev=235045&view=rev
Log:
[zorg] Fix LitTestCommand unexpected test result when TIMEOUT is returned.

Summary:
Support for LIT tests returning TIMEOUT was added a while back, but if a LIT test returns TIMEOUT, LitTestCommand reports an "Unexpected test result" error.

    lldb.tests test lldb 28 expected failures Unexpected test result output TIMEOUT 28 expected passes
        stdio
        TIMEOUT: LLDB::15-breakpoint-by-symbol-disable-no-hit

With this fix, a count of timeouts and timeout reason are displayed in the title.

    lldb.tests test lldb 6 unexpected failures 28 expected failures 2 timeout waiting for results 21 expected passes
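
For readers unfamiliar with the parser, the tallying this patch extends can be sketched roughly as follows. This is a minimal sketch, not the real LitTestCommand code; the regex, the function name `tally_results`, and the sample log are all illustrative:

```python
import re
from collections import defaultdict

# Result codes recognized by the sketch, matching the codes in the
# resultCodes table touched by this patch (including the new TIMEOUT).
RESULT_LINE = re.compile(
    r'^(PASS|FAIL|XFAIL|XPASS|KPASS|KFAIL|UNRESOLVED|UNTESTED|'
    r'REGRESSED|IMPROVED|UNSUPPORTED|TIMEOUT): ')

def tally_results(log_text):
    """Count each lit result code seen in the log output."""
    counts = defaultdict(int)
    for line in log_text.splitlines():
        m = RESULT_LINE.match(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

log = """\
PASS: test-one (1 of 4)
FAIL: test-two (2 of 4)
PASS: test-three (3 of 4)
TIMEOUT: test-four (4 of 4)
"""
print(tally_results(log))  # {'PASS': 2, 'FAIL': 1, 'TIMEOUT': 1}
```

With TIMEOUT in the recognized set, the count feeds the step title as "N timeout waiting for results" instead of tripping the unexpected-result path.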

I'm open to changing the timeout description "waiting for results"; I can't think of anything better at the moment.

My use case is a Pexpect "read non-blocking error" exception raised from expect_exact while waiting for an LLDB prompt.

Test Plan: Verified internally, using tests that return TIMEOUT and lit's verbose log output.

Reviewers: gkistanova, jroelofs

Reviewed By: jroelofs

Subscribers: ddunbar, jroelofs, llvm-commits

Projects: #zorg

Differential Revision: http://reviews.llvm.org/D9022

Modified:
    zorg/trunk/zorg/buildbot/commands/LitTestCommand.py

Modified: zorg/trunk/zorg/buildbot/commands/LitTestCommand.py
URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/zorg/buildbot/commands/LitTestCommand.py?rev=235045&r1=235044&r2=235045&view=diff
==============================================================================
--- zorg/trunk/zorg/buildbot/commands/LitTestCommand.py (original)
+++ zorg/trunk/zorg/buildbot/commands/LitTestCommand.py Wed Apr 15 16:42:36 2015
@@ -142,7 +142,8 @@ class LitTestCommand(Test):
                  'UNTESTED':'untested testcases',
                  'REGRESSED':'runtime performance regression',
                  'IMPROVED':'runtime performance improvement',
-                 'UNSUPPORTED':'unsupported tests'}
+                 'UNSUPPORTED':'unsupported tests',
+                 'TIMEOUT':'timeout waiting for results'}
 
   def __init__(self, ignore=[], flaky=[], max_logs=20, parseSummaryOnly=False,
                *args, **kwargs):
@@ -198,13 +199,14 @@ class TestLogObserver(unittest.TestCase)
 
   def test_basic(self):
     obs = self.parse_log("""
-PASS: test-one (1 of 3)
-FAIL: test-two (2 of 3)
-PASS: test-three (3 of 3)
+PASS: test-one (1 of 4)
+FAIL: test-two (2 of 4)
+PASS: test-three (3 of 4)
+TIMEOUT: test-four (4 of 4)
 """)
 
-    self.assertEqual(obs.resultCounts, { 'FAIL' : 1, 'PASS' : 2 })
-    self.assertEqual(obs.step.logs, [('FAIL: test-two', 'FAIL: test-two')])
+    self.assertEqual(obs.resultCounts, { 'FAIL' : 1, 'TIMEOUT' : 1, 'PASS' : 2 })
+    self.assertEqual(obs.step.logs, [('FAIL: test-two', 'FAIL: test-two'), ('TIMEOUT: test-four', 'TIMEOUT: test-four')])
 
   def test_verbose_logs(self):
     obs = self.parse_log("""
@@ -259,7 +261,7 @@ class TestCommand(unittest.TestCase):
 
     # If there were failing tests, the status should be an error (even if the
     # test command didn't report as such).
-    for failing_code in ('FAIL', 'XPASS', 'KPASS', 'UNRESOLVED'):
+    for failing_code in set(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED', 'TIMEOUT']):
       cmd = self.parse_log("""%s: test-one (1 of 1)""" % (failing_code,))
       self.assertEqual(cmd.evaluateCommand(RemoteCommandProxy(0)), FAILURE)
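
To illustrate the effect of the last hunk: once TIMEOUT is in the failing set, any timeout forces a FAILURE status even when the test command itself exits 0. A minimal sketch of that evaluation logic, with illustrative names rather than the actual buildbot API:

```python
# Hypothetical sketch of the evaluate step; FAILING_CODES mirrors the
# set in the hunk above, and evaluate() stands in for evaluateCommand.
FAILURE, SUCCESS = 'FAILURE', 'SUCCESS'
FAILING_CODES = frozenset(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED', 'TIMEOUT'])

def evaluate(result_counts, exit_code):
    """Return FAILURE if the command failed or any failing code was seen."""
    if exit_code != 0:
        return FAILURE
    if any(result_counts.get(code, 0) for code in FAILING_CODES):
        return FAILURE
    return SUCCESS
```

So `evaluate({'TIMEOUT': 1}, 0)` yields FAILURE in this sketch, which is the behavior the new test case checks.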
 
