[llvm-dev] [RFC] Lanai backend
Daniel Berlin via llvm-dev
llvm-dev at lists.llvm.org
Tue Feb 9 21:35:00 PST 2016
We consistently tell pretty much everyone we'd love for them to work
upstream. We see orgs working on general optimizations, etc., out of tree,
and tell them "you really should try to upstream it". I can't count the
number of times people have said this to others at the dev conference, no
matter *what they are doing*.
It's one thing when the developers in question just want a backend,
contribute nothing else, and want everyone else to deal with it and keep
it running. I'd agree that provides burden and nothing else. But that
seems pretty unlikely here.
It's not as if the optimizations/general infrastructure we write (and
people improve) get used in a vacuum. They're meant to speed up code on
various platforms. IMHO, given what we try to encourage as best practices,
it seems fundamentally wrong to deliberately tell others (who seem to want
to work with the community, and are otherwise likely to be good
contributors): "yeah, we really don't want you to contribute to LLVM
upstream" or "yeah, well, we'll take your general contributions, but we
don't want the rest of your stuff, no matter how well
designed/architected". Attracting contributors is a two-way street that
requires a show of good faith and understanding on both sides.
The one problem you identify, the burden of running code on backends for
proprietary hardware, seems pretty tractable to solve. For example, one
possible policy: we don't let people revert patches for *runtime*
failures in backends that nobody can run.
(I.e., you update the API, try to do the right thing, and if it passes the
regression tests, that's that. Even for codegen tests, if it's not obvious
what the right code to generate is, that's the maintainer's problem.)
So they get the benefit of the API updates people can perform, etc. We
don't end up with the burden of trying to work to fix stuff at runtime
without hardware that can be used, and we get the benefit of what those
contributors are willing to contribute.
(At the very least, IMHO, it seems worth experimenting to see whether the
burdens *do* outweigh the benefits over time. I don't think we *actually*
have good data on this; we are all just kind of guessing based on personal
experience.)
> I think you’ve summed it up really well here.
> Personally I don’t think we should accept backends for which there is no
> way to run the code. The burden (however small) on the community to having
> an in-tree backend they can’t use is too high IMO.
> As you point out ‘no way to run the code’ may mean not having access to
> HW, or having HW but no API.
> NVPTX is a good example. Now you can take the output from LLVM and run it
> on HW. It may or may not be how Nvidia does it in their code, but that
> doesn’t matter, you can do it. Same for AMDGPU.
> So -1 from me to having backends we can’t make use of.
> Finally, one option is to have perpetually experimental backends. Then
> all the code is in tree but no-one in tree should ever be expected to
> update it. That does have the big advantage that all of the code is there
> to discuss and the maintainers can make contributions to common code and
> gain/provide help in the community. They can also be involved in
> discussions which impact them such as changes to common code.
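(Side note: mechanically, this is roughly what the "perpetually
experimental" route already looks like today. Experimental targets aren't
built by default; anyone who wants one opts in at configure time. A sketch,
with flag names as documented in LLVM's CMake build docs and "Lanai" used
purely as a hypothetical target name here:

```shell
# Configure an LLVM build that includes an experimental backend.
# LLVM_TARGETS_TO_BUILD lists the default, supported targets;
# LLVM_EXPERIMENTAL_TARGETS_TO_BUILD opts in to targets not built by default.
cmake -G Ninja ../llvm \
  -DLLVM_TARGETS_TO_BUILD="X86" \
  -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="Lanai"
```

So nobody updating common code pays for the experimental backend unless
they explicitly enable it in their own build.)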
> On Feb 9, 2016, at 4:18 PM, Sean Silva via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
> One data point (IIRC) is that the NVPTX backend sat in tree for a long
> time without a way to actually use it. But lately this has been opening
> up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). However, the
> obstacle for NVPTX was mostly a software proprietary-ness (no way to plug
> it into the driver stack really, except via nvidia's own proprietary
> software), whereas the actual hardware was available. For the Lanai stuff,
> it seems like the hardware is fundamentally not available for purchase.
> The reverse situation is with e.g. Apple's GPU backends, where the devices
> are readily available, but (AFAIK) even if the backend were open-source you
> couldn't run the code produced by the open-source compiler.
> Or to put it in matrix form (this is all heavily prefixed by "AFAIK";
> corrections welcome):
> AMDGPU: InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> NVPTX: InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> Lanai: InTree:? DevicesAvailable:No CanIRunTheCode:No
> Apple GPU's: InTree:No DevicesAvailable:Yes CanIRunTheCode:No
> I couldn't come up with a good name for the "Can I Run The Code" column.
> Basically it means: "assuming the backend were in open source, could I
> actually run the code produced by the open source backend somehow?".
> I had a quick look at lib/Target and it seems like every backend we have
> has "CanIRunTheCode:Yes" in theory.
> IIRC, the NVPTX stuff used to actually be "No" though?
> Anyway, just a random thought. Not sure what the conclusion is.
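(For what it's worth, the CompileCudaWithLLVM doc linked above is exactly
the "CanIRunTheCode:Yes" path for NVPTX: clang compiles the .cu file and
the result links against and runs on NVIDIA's CUDA runtime. Roughly, per
that doc -- the source file name and CUDA install path below are
illustrative:

```shell
# Compile a CUDA source through clang's NVPTX path and link against
# NVIDIA's CUDA runtime; --cuda-gpu-arch selects the target GPU arch.
clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
  -L/usr/local/cuda/lib64 -lcudart
```

)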