[llvm-dev] [RFC] Require PRs for XFAILing tests

Chris Bieneman via llvm-dev llvm-dev at lists.llvm.org
Tue Oct 4 10:15:01 PDT 2016



> On Oct 3, 2016, at 12:05 PM, Krzysztof Parzyszek via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 
>> On 10/3/2016 12:55 PM, Robinson, Paul wrote:
>> 
>> If the XFAIL-linked PRs are content-free, that is worse than useless.
>> The point is to have those PRs say something useful about the specific
>> XFAIL case, which is a vast improvement over what we have today (i.e.,
>> almost nothing).
> 
> They will be content-free.

I think we can use code review to enforce that they aren't. If adding an XFAIL requires a PR, pre- or post-commit review can inspect the PR to ensure that it is not useless.
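For illustration only (the PR number and wording below are hypothetical, not taken from any real test), the requirement could amount to the XFAIL line in a lit test sitting next to a comment naming the PR:

    ; Fails since the reverted feature was removed; tracked in llvm.org/PR12345 (hypothetical).
    ; XFAIL: *

A reviewer can then check that the referenced PR actually says why the test fails and what would let it be re-enabled.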

> 
> Following of processes tends to degrade to meet the required minimum, especially when large groups of people are involved. In this proposal there is no way to enforce meaningful content, only that the PR exists.

I don't think we need additional process to enforce that the PRs are meaningful. Code review should suffice.

> 
> To make sure that the process works, it needs a motivation that goes beyond being solely a requirement. If the PR indicated a problem that will need to be fixed, adding content would become natural. If the PR is there only to keep a test from failing, adding information to it is just extra work.

If the PR is only there to keep a test from failing, the test probably shouldn't be XFAILed in the first place, and there should be a discussion about that. Requiring a PR could be a way to force that conversation.

> 
> All that aside, what would be "meaningful information" in case of an xfailed test? Doing enough analysis to find out why it fails is often good enough to just fix it. A common thing for PRs is that they only serve as a record that a problem exists, ideally with information on how to reproduce it (they usually come from users, not developers). In case of xfailed testcases we already know that they fail and how to reproduce them.

The bar to be better than what we have today is pretty low, because today we have nothing. A few bread crumbs that can be looked at two, three, or six years later, when someone cares, are better than what we have now.

One of the situations that results in "XFAIL: *" tests is when a complicated series of changes is reverted, or a feature is removed. Often the intention is to re-land it once some other issue is resolved. Without a PR tracking that, piecing together the context is very difficult. An example of this kind of situation is r62160, which, as I've mentioned before, introduced 7 XFAILs.

We also get things like r208232, a massive change that introduced an XFAIL. There is no explanation in the commit message as to why the test was XFAIL'd; the only bread crumb is the single-sentence comment "We do not recognize anymore variable size arrays." Is that a forever thing? If so, we should probably delete the test. The exercise of requiring someone to write a PR for that test case would likely have resulted in either the test being deleted, or a sentence or two being written about why de-linearization doesn't support variable size arrays and what work needs to be done to add that support back. It is likely *way* faster to write a PR with a few sentences than to fix the test.

-Chris

> 
> -Krzysztof
> 
> -- 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation

