[llvm-dev] [RFC] Make Lanai backend non-experimental

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Tue Jul 26 03:27:27 PDT 2016

On 26 July 2016 at 01:06, Chandler Carruth <chandlerc at google.com> wrote:
> For #1, I think in practice for a number of our backends, this is already
> going to fall largely on the heads of backend maintainers to get a useful
> test case to you or otherwise help you re-land your patch. If they can't do
> so promptly, I would expect it to be reasonable to XFAIL the test until they
> have time to work on it and make it entirely the backend maintainers'
> problem.
> For #2, the amount of ISA documentation made available for that architecture
> directly dictates how much time it is reasonable for you to spend fixing the
> test. If there are no docs, you shouldn't be doing more than fixing
> *obvious* test failures. Anything else should just get XFAILed and you
> should move on and leave it to the maintainers. If there is reasonable
> documentation to make small test cases understandable to roughly anyone,
> then great. You can spend somewhat more time trying to update things.

There is the matter of how much time is reasonable, which is entirely
subjective. But in general, I agree with you.

> None of this requires a simulator though, because by the time you or someone
> else working on basic infrastructure need access to a simulator or hardware
> for a platform you're not actively maintaining, I think the cost has usually
> already gotten too high. We should already be pushing that burden onto those
> who care about that backend.

It does, to a point. As a back-end maintainer (in the remote past), I
had to deal with people XFAILing our tests just because the tests broke
with their patches.

As Matthias said, test-suite, stage-2 and unstable tests don't work
with XFAIL, and for about six months (last year) we had to deal with
unstable ARM tests when the bug was actually in Clang (its C++ ABI
layout assumed x86 alignment).
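For context, XFAIL in LLVM's lit framework is a per-file annotation on a
regression test, which is part of why it cannot cover test-suite, stage-2,
or intermittently failing runs. A hypothetical codegen test marked this way
might look like (the Lanai triple and function are illustrative, not from
the thread):

```llvm
; RUN: llc -mtriple=lanai -verify-machineinstrs < %s | FileCheck %s
; Hypothetical: mark the whole file as expected-to-fail on all targets
; until the back-end maintainers can investigate. lit will report XPASS
; if the test unexpectedly starts passing again.
; XFAIL: *

define i32 @add(i32 %a, i32 %b) {
  %r = add i32 %a, %b
  ret i32 %r
}
; CHECK-LABEL: add
```

Note the annotation applies to the entire file and every RUN line in it,
so a flaky or environment-dependent failure cannot be expressed this way.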

So, while there is some truth in your points, they don't describe the
complete picture. If a simulator exists, you can test codegen issues;
if ABI docs exist, I can point out where Clang's assumptions are wrong;
and so on.

Your "spectrum" view, that the more the target community puts in, the
more they get back, is pragmatic and sane, but it doesn't solve the
long-term maintenance problem. The more XFAILs we put in, the less
reliable the target becomes, which feeds back into the cycle.

In the end, I don't think we want to remove back-ends, and spending a
little more effort while the target is still healthy (waiting a bit
longer, trying to understand the problem, etc.) can be the difference
between a back-end that spirals down and one that succeeds long term.

Hope this makes sense.
