[lldb-dev] increase timeout for tests?

Jim Ingham via lldb-dev lldb-dev at lists.llvm.org
Tue Mar 13 19:27:23 PDT 2018


It is unfortunate that we have to set really long test timeouts because we are timing the total test class run, not the individual tests.  It is really convenient to be able to group similar tests in one class so they can reuse setup and common check methods, etc.  But if we have to push the timeouts to something really long because these tests get charged incorrectly, that makes this strategy artificially less desirable.
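To make the grouping pattern concrete, here's a schematic example (plain unittest, not an actual lldb test): several independent test methods sharing one setUp().  With a per-file timeout, all of them get charged against a single budget even though each one is fast on its own.

    import unittest

    class TestGroupedExample(unittest.TestCase):
        """Schematic only: several independent tests sharing one setUp()."""

        def setUp(self):
            # Stand-in for expensive shared setup (building a target, etc.).
            self.fixture = list(range(1000))

        def test_first_case(self):
            self.assertEqual(len(self.fixture), 1000)

        def test_second_case(self):
            self.assertIn(0, self.fixture)

        def test_third_case(self):
            self.assertNotIn(1000, self.fixture)

    if __name__ == "__main__":
        unittest.main()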

When we spawn a dotest.py to run each test file, the runner that is doing the timeout hasn't ingested the test class, so it can't do something reasonable like count the number of tests and multiply that into the timeout to get the full timeout for the file.  I tried to hack around this, but I wasn't able to get hold of the test configuration in the runner in order to figure out the number of tests there.  If somebody more familiar with the test harness than I am can see a way to do that, that seems like a much better way to go.
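Something along these lines is what I mean, as a rough sketch only -- the helper and its arguments are made up for illustration, not the actual dotest/dosep plumbing (which is exactly the part I couldn't get at):

    import os
    import unittest

    def scaled_timeout(test_file, base_timeout_secs):
        # Hypothetical helper: load the file with the stock unittest
        # loader, count its test methods, and scale the per-file
        # timeout by that count so a class with many tests gets a
        # proportionally bigger budget.
        loader = unittest.TestLoader()
        suite = loader.discover(start_dir=os.path.dirname(test_file) or ".",
                                pattern=os.path.basename(test_file))
        return base_timeout_secs * max(suite.countTestCases(), 1)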

But if we can't do that, then we can increase the overall timeout, though we might want to override it with LLDB_TEST_TIMEOUT and set it to something lower on the bots.
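On the bot side, the override could be as simple as this (a minimal sketch; I'm assuming the value is a plain number of seconds, which may not match whatever format the harness actually accepts):

    import os

    DEFAULT_TIMEOUT_SECS = 20 * 60  # generous backstop for local runs

    def effective_timeout():
        # Bots can export LLDB_TEST_TIMEOUT to clamp this down;
        # treating the value as seconds here purely for illustration.
        override = os.environ.get("LLDB_TEST_TIMEOUT")
        return int(override) if override else DEFAULT_TIMEOUT_SECS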

Jim

> On Mar 13, 2018, at 5:47 PM, Vedant Kumar <vsk at apple.com> wrote:
> 
> As a first step, I think there's consensus on increasing the test timeout to ~3x the length of the slowest test we know of. That test appears to be TestDataFormatterObjC, which takes 388 seconds on Davide's machine. So I propose 20 minutes as the timeout value.
> 
> Separately, regarding x-failed pexpect()-backed tests, I propose deleting them if they've been x-failed for over a year. That seems like a long enough time to wait for someone to step up and fix them given that they're a real testing/maintenance burden. For any group of to-be-deleted tests, like the 22 lldb-mi tests x-failed in all configurations, I'd file a PR about potentially bringing the tests back. Thoughts?
> 
> thanks,
> vedant
> 
>> On Mar 13, 2018, at 11:52 AM, Davide Italiano <dccitaliano at gmail.com> wrote:
>> 
>> On Tue, Mar 13, 2018 at 11:26 AM, Jim Ingham <jingham at apple.com> wrote:
>>> It sounds like we are timing out based on the whole test class, not the individual tests?  If you're worried about test failures not hanging up the test suite, then you really want to do the latter.
>>> 
>>> These are all tests that contain 5 or more independent tests.  That's probably why they are taking so long to run.
>>> 
>>> I don't object to having fairly long backstop timeouts, though I agree with Pavel that we should choose something reasonable based on the slowest running tests just so some single error doesn't cause test runs to just never complete, making analysis harder.
>>> 
>> 
>> Vedant (cc:ed) is going to take a look at this as he's babysitting the
>> bots for the week. I'll defer the call to him.
> 


