[llvm-dev] [RFC] Lanai backend

Chris Lattner via llvm-dev llvm-dev at lists.llvm.org
Wed Feb 10 14:12:55 PST 2016


I’m responding to several points below, and I want to make one thing perfectly clear:

Given that Google has a great track record of contributing to LLVM, I am not particularly worried about this specific case.  I’m only worried because it will set a precedent for future backend submissions.  If we accept Lanai, and someone else later comes to us with an analogous situation, we won’t be able to say:

 “well yes, I agree this is exactly the same situation as Lanai was, but you don’t have a track record of contribution like Google’s, so no, we won’t accept your backend”.

That just won’t fly.  Because of that, I think we *have* to ignore the track record and (surely) good intentions of the organization contributing the code, and look at this from first principles.



On Feb 9, 2016, at 10:44 PM, Chandler Carruth <chandlerc at google.com> wrote:
> Given that, I personally have no objection to accepting the port as an experimental backend.  For it to be a non-experimental backend, I think it needs to have a buildbot running execution tests for the target.  This could either use a simulator, or Google could host a buildbot on the hardware they presumably have and make the results public.
> 
> So, while I'm personally happy with this answer, I'm curious: why do you think this is important to have?

My goal is specifically to add a burden to the people maintaining the port.

If there is no burden, then it becomes too easy for the contributor to drop the port in and then forget about it.  We end up carrying it around for years, and because people keep updating (non-execution) tests and the source code for the port, we’ll never know that it is broken in reality.  Execution tests are the only real integration tests we have.
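To make the distinction concrete (this is just an illustrative sketch, not taken from the port itself; the target triple and the checked mnemonic are assumptions): an in-tree regression test only pattern-matches the assembly that llc prints, e.g.

  ; RUN: llc -mtriple=lanai < %s | FileCheck %s
  define i32 @sum(i32 %a, i32 %b) {
    %r = add i32 %a, %b
    ret i32 %r
  }
  ; CHECK-LABEL: sum:
  ; CHECK: add

whereas an execution test actually compiles and runs a program (e.g. the llvm test-suite on real hardware or a simulator) and compares its output, which is what catches a port being broken in reality.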

> 2) Ensures a fairly easy way to tell if the maintainers have gone out to lunch - no build bot stays working without some attention.

Yes, this is my concern.

> But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out-of-tree.

This is problematic for two reasons:

1) There is a huge difference between “having tests” and “having good tests that don’t break all the time”.
2) This doesn’t help with port abandonment.

Historically, we have only successfully removed ports because “they don’t actually work”.  AFAIK, we have never removed a working port that “might work well enough for some use case” just because the contributor hasn’t been heard from.  The risk here is much higher with a proprietary target like this, because we have no (independent) way to measure whether the code is working in practice.

Maybe I just carry too many scars here.  Remember that I was the guy who got stuck with rewriting the “new” SPARC port from scratch, because otherwise we “couldn’t” remove the (barely working) “SparcV9” backend, which was dependent on a bunch of obsolete infrastructure.  I didn’t even have a Sparc machine to test on, so I had to SSH into remote machines I got access to.

-Chris
