[llvm-dev] [RFC] Make Lanai backend non-experimental
Matthias Braun via llvm-dev
llvm-dev at lists.llvm.org
Mon Jul 25 17:57:39 PDT 2016
> On Jul 25, 2016, at 5:06 PM, Chandler Carruth via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> On Mon, Jul 25, 2016 at 4:49 PM Matthias Braun via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
> The question I want answered as a community member is: what happens when I push a patch to, say, the register allocator or the scheduler, and the Lanai buildbot reports a breakage? While I should obviously revert my patch, how would I go forward when I cannot figure out the reason? This is especially bad if I can't get access to additional information to understand the generated instructions; Lanai is still missing from docs/CompilerWriterInfo.rst, for example.
> There are two kinds of failures:
> 1) A lanai build bot fails an *execution* test using some (perhaps private) emulator.
> 2) A lanai regression test fails (regardless of what bot it fails on).
> For #1, I think in practice for a number of our backends, this is already going to fall largely on the heads of backend maintainers to get a useful test case to you or otherwise help you re-land your patch. If they can't do so promptly, I would expect it to be reasonable to XFAIL the test until they have time to work on it and make it entirely the backend maintainers' problem.
> For #2, the amount of ISA documentation made available for that architecture directly dictates how much time it is reasonable for you to spend fixing the test. If there are no docs, you shouldn't be doing more than fixing *obvious* test failures. Anything else should just get XFAILed and you should move on and leave it to the maintainers. If there is reasonable documentation to make small test cases understandable to roughly anyone, then great. You can spend somewhat more time trying to update things.
> But for *both* of these, Chris's principle should still apply: does the backend maintainers' engagement in common LLVM infrastructure work outweigh the non-zero maintenance cost imposed on the upstream project?
> My hope is that for relatively simple backends, the maintenance cost can be close enough to zero that, given sufficiently active maintainers, it is usually a win for the project to have them in tree. To that end, I like backend maintainers providing enough ISA documentation for developers to quickly make simple updates to tests. At the same time, I'd like developers *not* to have to make complex updates to a particular backend's tests just because they changed the infrastructure; instead, I'd like to see more things along the lines of you and others saying "Temporarily XFAIL various target tests; maintainers are aware and will update these tests once the infrastructure change lands" to expedite improvements to the core.
> My view is that this keeps the cost of a backend low, allowing the community to benefit from the diverse developers of those backends.
> None of this requires a simulator, though: by the time you or someone else working on basic infrastructure needs access to a simulator or hardware for a platform you're not actively maintaining, the cost has usually already gotten too high. We should already be pushing that burden onto those who care about that backend.
I have certainly had several occasions in the past where some help from target maintainers would have been necessary, or at least would have saved me days of reading target code and guessing what went wrong. XFAILing is also not possible for llvm-testsuite or stage2 bot failures today.
Admittedly, that isn't a problem new with Lanai. Looking back, what I really would have needed was a way to reproduce the exact situation the buildbot was in; having documentation and an emulator available doesn't necessarily give you that.
I don't want to make a statement for or against Lanai; I merely want to point out that adding a new target isn't necessarily cheap.
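(For readers unfamiliar with the XFAIL mechanism discussed above: it is a one-line lit directive in the test file itself. A minimal sketch of what a temporarily-disabled Lanai codegen test might look like follows; the function and CHECK lines are illustrative, not taken from an actual in-tree test.)

```llvm
; Temporarily expected to fail while an infrastructure change lands;
; the Lanai maintainers will update the CHECK lines afterwards.
; XFAIL: *
; RUN: llc -march=lanai < %s | FileCheck %s

define i32 @add(i32 %a, i32 %b) {
; CHECK-LABEL: add:
; CHECK: add
  %r = add i32 %a, %b
  ret i32 %r
}
```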