[llvm-dev] Responsibilities of a buildbot owner

Jim Ingham via llvm-dev llvm-dev at lists.llvm.org
Mon Jan 10 17:26:20 PST 2022


This situation is somewhat complicated by the fact that Zachary - the only listed code owner for Windows support - hasn’t worked on lldb for quite a while now.  Various people have been helping with the Windows port, but I’m not sure that there’s someone taking overall responsibility for the Windows port.

Greg may have access to a Windows system, but neither Jason nor I work on Windows at all.  In fact, I don’t think anybody listed in the Code Owners file for lldb does much work on Windows.  For the health of that port, we probably do need someone to organize the effort and help sort out this sort of thing.

Anyway, looking at the current set of bot failures for this Windows bot, I saw three basic classes of failures (besides the build breaks).

1) Watchpoint Support:

TestWatchLocation.py wasn’t the only, or even the most common, watchpoint failure in these test runs.

For instance in:

https://lab.llvm.org/buildbot/#/builders/83/builds/13600
https://lab.llvm.org/buildbot/#/builders/83/builds/13543

The failing test is TestWatchpointMultipleThreads.py.

On:

https://lab.llvm.org/buildbot/#/builders/83/builds/13579
https://lab.llvm.org/buildbot/#/builders/83/builds/13576
https://lab.llvm.org/buildbot/#/builders/83/builds/13565
https://lab.llvm.org/buildbot/#/builders/83/builds/13538

it’s TestSetWatchlocation.py.

On:
https://lab.llvm.org/buildbot/#/builders/83/builds/13550
https://lab.llvm.org/buildbot/#/builders/83/builds/13508

it’s TestWatchLocationWithWatchSet.py.

On:

https://lab.llvm.org/buildbot/#/builders/83/builds/13528

it’s TestTargetWatchAddress.py.

These are all failing in essentially the same way: we set a watchpoint, expect to hit it, and don’t.  In the failing tests, we do verify that we got a valid watchpoint back; we just “continue” expecting to hit it and never do.  The tests don’t seem to be doing anything suspicious that would cause inconsistent behavior, and they aren’t failing on other systems.  It looks more like the way lldb-server for Windows implements watchpoint setting is flaky in some way.
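
For anyone who hasn’t read these tests, the shared shape, reduced to a sketch (this uses the real lldb Python SB API, but the variable and helper names here are illustrative, not taken from the actual test files), is roughly:

```python
import lldb

# Sketch of the pattern shared by these watchpoint tests (simplified;
# "g_watch_me" and "watch_and_continue" are illustrative names only).
def watch_and_continue(test, frame, process):
    # Find a global variable and set a write watchpoint on it.
    value = frame.FindValue("g_watch_me", lldb.eValueTypeVariableGlobal)
    error = lldb.SBError()
    watchpoint = value.Watch(True, False, True, error)

    # The tests do confirm the watchpoint was created successfully...
    test.assertTrue(error.Success() and watchpoint.IsValid())

    # ...then continue, expecting the next stop to be the watchpoint.
    # On the Windows bot the process instead runs to completion.
    process.Continue()
    thread = process.GetSelectedThread()
    test.assertEqual(thread.GetStopReason(), lldb.eStopReasonWatchpoint)
```

It’s that final assertion, in one form or another, that these runs are tripping over.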

So these really are “tests correctly showing flaky behavior in the underlying code”.  We could just skip all these watchpoint tests, but we already have some 268 tests marked skipIfWindows, most with annotations that some behavior or other is flaky on Windows.  It is not great for the platform support to just keep adding to that count, but if nobody is available to dig into the Windows watchpoint code, we probably need to declare watchpoint support “in a beta state” and turn off all the tests for it.  But that seems like a decision that should be made by someone with more direct responsibility for the Windows port.

Does our bot strategy cover how to deal with incomplete platform support on some particular platform?  Is the only choice really just turning off all the tests that are uncovering flaws in the underlying implementation?

2) Random mysterious failure:

I also saw one failure here:

https://lab.llvm.org/buildbot/#/builders/83/builds/13513

functionalities/load_after_attach/TestLoadAfterAttach.py

In that one, lldb sets a breakpoint, confirms that the breakpoint got a valid location, then continues and runs to completion without hitting the breakpoint.  Again, that test is quite straightforward, and it looks like the underlying implementation, not the test, is at fault.

3) lldb-server for Windows test failures:

In these runs:

https://lab.llvm.org/buildbot/#/builders/83/builds/13594
https://lab.llvm.org/buildbot/#/builders/83/builds/13580
https://lab.llvm.org/buildbot/#/builders/83/builds/13550
https://lab.llvm.org/buildbot/#/builders/83/builds/13535
https://lab.llvm.org/buildbot/#/builders/83/builds/13526
https://lab.llvm.org/buildbot/#/builders/83/builds/13525
https://lab.llvm.org/buildbot/#/builders/83/builds/13511
https://lab.llvm.org/buildbot/#/builders/83/builds/13498

The failure was in the Windows lldb-server implementation, here:

tools/lldb-server/tests/./LLDBServerTests.exe/StandardStartupTest.TestStopReplyContainsThreadPcs

And there were a couple more lldb-server test fails:

https://lab.llvm.org/buildbot/#/builders/83/builds/13527
https://lab.llvm.org/buildbot/#/builders/83/builds/13524

Where the failure is:

tools/lldb-server/TestGdbRemoteExpeditedRegisters.py

macOS doesn’t use lldb-server, so I am not particularly familiar with it, and didn’t look into these failures further.

Jim




> On Jan 10, 2022, at 3:33 PM, Philip Reames <listmail at philipreames.com> wrote:
> 
> +CC lldb code owners  
> 
> This bot appears to have been restored to the primary buildmaster, but is failing something like 1 in 5 builds due to lldb tests which are flaky.
> 
> https://lab.llvm.org/buildbot/#/builders/83
> Specifically, this test is the one failing:
> 
> commands/watchpoints/hello_watchlocation/TestWatchLocation.py
> Can someone with LLDB context please either a) address the cause of the flakiness or b) disable the test?
> 
> Philip
> 
> p.s. Please restrict this sub-thread to the topic of stabilizing this bot.  Policy questions can be addressed in the other sub-threads to keep this vaguely understandable.  
> 
> On 1/8/22 1:01 PM, Philip Reames via llvm-dev wrote:
>> In this particular example, we appear to have a bunch of flaky lldb tests.  I personally know absolutely nothing about lldb.  I have no idea whether the tests are badly designed, the system they're being run on isn't yet supported by lldb, or if there's some recent code bug introduced which causes the failure.  "Someone" needs to take the responsibility of figuring that out, and in the meantime spamming developers with non-actionable failure notices seems undesirable.
