<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>+CC lldb code owners <br>
</p>
<p>This bot appears to have been restored to the primary
buildmaster, but is failing roughly 1 in 5 builds due to flaky
lldb tests.</p>
<p><a class="moz-txt-link-freetext" href="https://lab.llvm.org/buildbot/#/builders/83">https://lab.llvm.org/buildbot/#/builders/83</a></p>
<p>Specifically, this test is the one failing:</p>
<pre>commands/watchpoints/hello_watchlocation/TestWatchLocation.py</pre>
<p>Can someone with LLDB context please either a) address the cause
of the flakiness or b) disable the test?<br>
</p>
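<p>For option (b), the usual mechanism is a skip decorator on the
offending test method. lldb's suite ships its own decorators in
<tt>lldbsuite.test.decorators</tt>, but the idea is the same as a plain
unittest skip; the sketch below is a hypothetical illustration (class
and method names are placeholders, not the actual lldb test):</p>
<pre>
import unittest

class TestWatchLocation(unittest.TestCase):
    # Hypothetical illustration: marking the flaky test as skipped so it
    # stops failing bot builds while the root cause is investigated.
    @unittest.skip("flaky on the primary buildmaster; see builder 83")
    def test_hello_watchlocation(self):
        self.fail("flaky body never runs while the skip is in place")
</pre>
<p>Running this through a unittest runner reports the test as skipped
rather than failed, which is what keeps the bot green in the meantime.</p>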
<p>Philip</p>
<p>p.s. Please restrict this sub-thread to the topic of stabilizing
this bot. Policy questions can be addressed in the other
sub-threads to keep this vaguely understandable. <br>
</p>
<div class="moz-cite-prefix">On 1/8/22 1:01 PM, Philip Reames via
llvm-dev wrote:<br>
</div>
<blockquote type="cite"
cite="mid:d12cd142-0113-ceb9-4e34-f0391eaacf84@philipreames.com">In
this particular example, we appear to have a bunch of flaky lldb
tests. I personally know absolutely nothing about lldb. I have
no idea whether the tests are badly designed, the system they're
being run on isn't yet supported by lldb, or if there's some
recent code bug introduced which causes the failure. "Someone"
needs to take the responsibility of figuring that out, and in the
meantime spamming developers with unactionable failure notices
seems undesirable. </blockquote>
</body>
</html>