I'm responding to several points below, and I want to make one thing perfectly clear:

Given that Google has a great track record of contributing to LLVM, I am not particularly worried about this specific case. I'm only worried about this scenario because it will set a precedent for future backend submissions. If we accept Lanai, and someone comes up with an analogous situation, we won't be able to say:

  "Well yes, I agree this is exactly the same situation as Lanai was, but you don't have a track record of contribution like Google's, so no, we won't accept your backend."

That just won't fly. Because of that, I think we *have* to ignore the track record and (surely) good intentions of the organization contributing the code, and look at this from first principles.

On Feb 9, 2016, at 10:44 PM, Chandler Carruth <chandlerc@google.com> wrote:
>> Given that, I personally have no objection to accepting the port as an experimental backend. For it to be a non-experimental backend, I think it needs to have a buildbot running execution tests for the target. This can either be a simulator, or Google could host a buildbot on the hardware they presumably have and make the results public.
>
> So, while I'm personally happy with this answer, I'm curious: why do you think this is important to have?

My goal is specifically to add a burden to the people maintaining the port.

If there is no burden, it becomes too easy for the contributor to drop the port in and then forget about it. We end up carrying it around for years, and because people keep updating the (non-execution) tests and the source code for the port, we'll never know that it is actually broken. Execution tests are the only real integration tests we have.

> 2) Ensures a fairly easy way to tell if the maintainers have gone out to lunch - no buildbot stays working without some attention.

Yes, this is my concern.

> But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out of tree.

This is problematic for two reasons:

1) There is a huge difference between "having tests" and "having good tests that don't break all the time".
2) It doesn't help with port abandonment.

Historically, we have only successfully removed ports because "they don't actually work". AFAIK, we have never removed a working port that "might work well enough for some use case" just because the contributor hasn't been heard from. The risk here is much higher with a proprietary target like this, because we have no independent way to measure whether the code is working in practice.

Maybe I just carry too many scars here. Remember that I was the guy who got stuck rewriting the "new" SPARC port from scratch, because otherwise we "couldn't" remove the (barely working) "SparcV9" backend, which depended on a bunch of obsolete infrastructure. I didn't even have a SPARC machine to test on, so I had to SSH into remote machines I got access to.

-Chris