[llvm-dev] [RFC] Lanai backend

Chandler Carruth via llvm-dev llvm-dev at lists.llvm.org
Tue Feb 9 22:44:51 PST 2016


On Tue, Feb 9, 2016 at 10:30 PM Chris Lattner <clattner at apple.com> wrote:

>
> > On Feb 9, 2016, at 10:24 PM, Chandler Carruth via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
> >
> > You've raised an important point here Pete, and while I disagree pretty
> strongly with it (regardless of whether Lanai makes sense or not), I'm glad
> that you've surfaced it where we can clearly look at the issue.
> >
> > The idea of "it really should have users outside of just the people who
> have access to the HW" I think is deeply problematic for the project as a
> whole. Where does it stop?
> >
> > While I may have the theoretical ability to get access to an AVR,
> Hexagon, MSP430, SystemZ, or XCore processor... It is a practical
> impossibility. There is no way that I, or I suspect 95% of LLVM
> contributors, will be able to run code for all these platforms. And for
> some of them, I suspect it is already the case that their only users have
> access to specialized, quite hard to acquire hardware (both Hexagon[1] and
> SystemZ come to mind).
>
> Yes, I think this is a reasonable point.  The cheapest SystemZ system is
> somewhere around $75K, so widespread availability isn’t really a relevant
> criterion for accepting that.
>
> Given that, I personally have no objection to accepting the port as an
> experimental backend.  For it to be a non-experimental backend, I think it
> needs to have a buildbot running execution tests for the target.  This can
> either be a simulator, or Google could host a buildbot on the hardware they
> presumably have and make the results public.
>

So, while I'm personally happy with this answer, I'm curious: why do you
think this is important to have?

I can imagine several reasons myself, in the order they floated to mind:
1) Makes sure it isn't vaporware and actually works, etc.
2) Ensures a fairly easy way to tell if the maintainers have gone out to
lunch - no buildbot stays working without some attention.
3) Makes it clearer whether a change to LLVM breaks that target.

Are there other issues you're thinking about?

Looking just at these three, they all seem reasonable goals. I can see
other ways of accomplishing #1 and #2 (there are lots of ways to establish
community trust), but cannot see any other way of accomplishing #3.

But I also see another option, which someone else mentioned up-thread:
support only the regression tests. Without a regression test case that
exhibits a bug, there would be no reverts or other complaints. It would be
entirely up to the maintainer to find and reduce such test cases from any
failure of execution tests out-of-tree.

I prefer this option for perhaps a strange reason: as an LLVM developer I
would find it *more* appealing to only ever have to care about
<insert-random-target> once the maintainer produced a reduced test case for
me. Having a buildbot check the execution and fail is actually not terribly
helpful to me in most cases. Sometimes I'm already suspicious of a patch
and it helpfully confirms my suspicions, but most of the time I'm going to
need a test case to do anything useful with the failure and I won't have
the tools necessary to produce that test case merely because there is a
build bot.

Anyways, as I said, I think this is somewhat theoretical at this point, but
it seems useful to kind of pin down both what our expectations are and
*why*.

-Chandler
