[lldb-dev] increase timeout for tests?
Ted Woodward via lldb-dev
lldb-dev at lists.llvm.org
Wed Mar 14 11:27:49 PDT 2018
I don't see 22 lldb-mi tests xfailed everywhere. I see a lot of tests skipped, but
those are clearly marked as skipped on Windows, FreeBSD, Darwin, or Linux. I've got
a good chunk of the lldb-mi tests running on Hexagon. I don't want them deleted,
since I use them.
lldb-mi tests can be hard to debug, but I found that setting the lldb-mi log to be
stdout helps a lot. In lldbmi_testcase.py, in spawnLldbMi, add this line:
self.child.logfile = sys.stdout
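For readers unfamiliar with pexpect's logging, here is a rough, standard-library-only sketch of what assigning a stream to `logfile` accomplishes: every byte of the lldb-mi session gets mirrored to that stream while the test still sees it. The `Tee` class below is a hypothetical illustration of the mechanism, not part of the actual test harness:

```python
import io
import sys

class Tee(io.StringIO):
    """Capture writes while also mirroring them to another stream."""

    def __init__(self, mirror):
        super().__init__()
        self.mirror = mirror

    def write(self, s):
        self.mirror.write(s)      # echo to the mirror (e.g. sys.stdout)
        return super().write(s)   # and keep a copy for later inspection

# In lldbmi_testcase.py the real assignment is simply:
#     self.child.logfile = sys.stdout
# which makes every MI command and reply show up in the test output.
session_log = Tee(sys.stdout)
session_log.write("-exec-run\n")  # e.g. an MI command sent to lldb-mi
```

With the real `self.child.logfile = sys.stdout` in place, a hanging or failing lldb-mi test prints the full MI command/reply transcript as it runs, which is usually enough to see where the session went wrong.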
Qualcomm Innovation Center, Inc.
> -----Original Message-----
> From: lldb-dev [mailto:lldb-dev-bounces at lists.llvm.org] On Behalf Of Vedant
> Kumar via lldb-dev
> Sent: Tuesday, March 13, 2018 7:48 PM
> To: Davide Italiano <dccitaliano at gmail.com>
> Cc: LLDB <lldb-dev at lists.llvm.org>
> Subject: Re: [lldb-dev] increase timeout for tests?
> As a first step, I think there's consensus on increasing the test timeout to ~3x
> the length of the slowest test we know of. That test appears to be
> TestDataFormatterObjC, which takes 388 seconds on Davide's machine. So I
> propose 20 minutes as the timeout value.
> Separately, regarding xfailed pexpect-based tests, I propose deleting them
> if they've been xfailed for over a year. That seems like long enough to
> wait for someone to step up and fix them, given that they're a real
> testing/maintenance burden. For any group of to-be-deleted tests, like the 22
> lldb-mi tests xfailed in all configurations, I'd file a PR about potentially
> bringing the tests back. Thoughts?
> > On Mar 13, 2018, at 11:52 AM, Davide Italiano <dccitaliano at gmail.com> wrote:
> > On Tue, Mar 13, 2018 at 11:26 AM, Jim Ingham <jingham at apple.com> wrote:
> >> It sounds like we're timing out based on the whole test class, not the individual
> tests? If you're worried about test failures not hanging up the test suite, then you
> really want to do the latter.
> >> These are all tests that contain 5 or more independent tests. That's
> probably why they are taking so long to run.
> >> I don't object to having fairly long backstop timeouts, though I agree with
> Pavel that we should choose something reasonable based on the slowest
> running tests just so some single error doesn't cause test runs to just never
> complete, making analysis harder.
> > Vedant (cc:ed) is going to take a look at this as he's babysitting the
> > bots for the week. I'll defer the call to him.