[llvm-dev] [RFC] Make Lanai backend non-experimental
Sean Silva via llvm-dev
llvm-dev at lists.llvm.org
Mon Jul 25 21:41:11 PDT 2016
On Mon, Jul 25, 2016 at 4:08 PM, Chris Lattner <clattner at apple.com> wrote:
> On Jul 25, 2016, at 4:02 PM, Sean Silva via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>> I'm sure it is theoretically possible for me to acquire such a system and
>> test on it, but the amount of time and energy it would take make it a
>> practical impossibility.
>> Even for fairly pedestrian backends such as AArch64 and PowerPC, we
>> routinely have people ask active developers on those backends to do the
>> triaging of issues. I've never seen this be a problem in practice, and I'm
>> not convinced that it is something we should actually require to have an
>> upstream backend.
>> As I mentioned in the original thread "[llvm-dev] [RFC] Lanai backend", it
>> seems to me that the "can I run the code coming out of the backend?"
>> property is more about enabling LLVM's users to do interesting things with
>> the backend, rather than about LLVM's internal maintenance.
>> I'm not sure how that affects the decision.
> MHO, but I almost completely disagree with you. I agree that the question
> comes down to asking how much benefit to the community there is to merging
> the backend, balanced against how much cost there is (your "internal
> maintenance" point).
Was there supposed to be a sentence here explaining what you specifically
disagree with? I'm having a hard time understanding what it is you disagree
with. AFAICT, in the post you quoted and the linked post, I made only two
statements that you might disagree with:
1. "can I run the code coming out of the backend?" property is more about
enabling LLVM's users to do interesting things with the backend, rather
than about LLVM's internal maintenance.
2. As far as maintenance of the LLVM project is concerned, leaning on
active maintainers is the way that issues get solved in practice, so the
"can I run the code" quality isn't relevant for maintenance.
Which one is it? (or was there something else I said in this thread or the
linked thread that you were referring to?)
FWIW, I don't disagree with any of the points you brought up.
> It is unquestionably easier for a contributor to land their backend
> in-tree than to maintain it out-of-tree. This is because landing it in
> tree shifts the maintenance burden from the *contributor* to the
> *community*. If there is low value to the community, then this is a "bad
> deal" for the project as a whole, since there is only so much attention to
> go around.
>> My gut (and I'm actually not sure if I exactly consider this a pro or a
>> con personally) is that it sounds Stallman-esque to require backends to
>> answer "yes" to "can I run the code coming out of the backend?". If you
>> squint it is sort of like an anti-tivoization requirement, which would be a
>> very ironic requirement for a BSD-licensed project.
> This seems like some weird FUD or mud slinging argument, I’m not sure what
> to make of it.
If I understand what you're saying correctly, you don't see the parallel.
Essentially, both anti-tivoization and requiring that one can run the code
that comes out of an LLVM backend involve a situation where a programmer
has access to code (either the GPL code in the anti-tivoization case, or
LLVM source code in the LLVM backend case) and is free to modify it
however they want. However, without the extra requirement (either
anti-tivoization clause or requiring the ability to run the code that comes
from an LLVM backend), hardware manufacturers would be able to restrict
that programmer's ability to use their modified code for its intended
purpose (either powering the tivo device or compiling code that is then run
on the device targeted by the LLVM backend).
Hopefully that makes sense. I'm not trying to spread FUD or sling mud or
make an argument, but rather to draw a parallel.
A lot of us are hacker/tinkerers at heart, and so there is some
attractiveness to knowing that if we (or others like us) are interested, we
can go and play around with a backend, learn about the ISA, compile some
code, and run stuff. I've personally done this e.g. for AMDGPU at home,
purely out of curiosity (I purchased a machine with an AMD GPU specifically
for this purpose). I think this resonates with quite a few in the community
(or maybe it is just me :) ). So on the surface it seems like this might
be an interesting consideration. However, I think those people have
converged on putting aside these feelings and seeing that for the project
as a whole, it is not the right thing to look at; and also that existing
examples like SystemZ show we didn't really feel so strongly about it
after all.
Now, from what I gather from the other thread, your incoming concern is
different, given your different set of experiences:
"Maybe I just carry too many scars here. Remember that I was the guy who
got stuck with rewriting the "new” SPARC port from scratch, because
otherwise we “couldn’t” remove the (barely working) “SparcV9" backend,
which was dependent on a bunch of obsolete infrastructure. I didn’t even
have a Sparc machine to test on, so I had to SSH into remote machines I got ..."
That's fine, but not everybody is coming into the discussion from that
perspective. Notice, though, that your experiences gave you quite a head
start over many of us -- it seems that at this point we have all come to
agree that the internal concerns of LLVM maintenance are the primary
consideration, as you insisted from the start.
The Apple mobile GPU example, though, has consistently convinced me
that "can I run the code" should not play a role in accepting a backend.
-- Sean Silva