[llvm-dev] [RFC] Lanai backend

Sean Silva via llvm-dev llvm-dev at lists.llvm.org
Tue Feb 9 22:31:29 PST 2016


On Tue, Feb 9, 2016 at 8:59 PM, Pete Cooper <peter_cooper at apple.com> wrote:

> Hi Sean
>
> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is no
> way to run the code.  The burden (however small) on the community of having
> an in-tree backend they can’t use is too high IMO.
>
> As you point out, ‘no way to run the code’ may mean not having access to
> HW, or having HW but no API.
>
> NVPTX is a good example.  Now you can take the output from LLVM and run it
> on HW.  It may or may not be how Nvidia does it in their own compiler, but
> that doesn’t matter; you can do it.  Same for AMDGPU.
>

One thing to note is that all the legwork for getting CUDA working with
the open source toolchain (e.g.
http://llvm.org/docs/CompileCudaWithLLVM.html) was done by Googlers
(Jingyue Wu, Artem Belevich, and probably others I don't remember off the
top of my head). In my experience with LLVM, the Google folks have been
first-rate open source citizens. Unless Jacques' team contributing here is
from a drastically different organization/culture within Google (idk, are
they?), I have full faith that this is meant with the best intentions and
won't go the way of a "code drop".
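
For anyone who wants to kick the tires, here is a minimal sketch in the
spirit of the example in that doc (the exact flags, paths, and the sm_35
arch below are assumptions; check the doc for your release):

    // axpy.cu -- minimal sketch; everything here is illustrative, not
    // the doc's exact example.
    #include <cstdio>

    __global__ void axpy(float a, float *x, float *y) {
      y[threadIdx.x] = a * x[threadIdx.x];
    }

    int main() {
      const int n = 4;
      float a = 2.0f, hx[n] = {1, 2, 3, 4}, hy[n];
      float *dx, *dy;
      cudaMalloc(&dx, n * sizeof(float));
      cudaMalloc(&dy, n * sizeof(float));
      cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
      axpy<<<1, n>>>(a, dx, dy);  // one block, n threads
      cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
      for (int i = 0; i < n; ++i)
        printf("%.1f\n", hy[i]);  // expect 2.0 4.0 6.0 8.0
      cudaFree(dx);
      cudaFree(dy);
      return 0;
    }

and then build it entirely with the open source toolchain, something like:

    clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
        -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread

The point being: the whole path from source to running code on the HW is
exercisable without Nvidia's compiler.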


>
> So -1 from me to having backends we can’t make use of.
>

I'm not sure I agree with that. Assume for a second that we are in a
hypothetical world where the reasons for keeping the Apple GPU backends
private vanished (but the actual driver stack was still locked down; i.e.
CanIRunTheCode is still "No"). I would personally say that it would be
beneficial for LLVM to have those backends developed upstream if only so
that we can have Owen's team participating upstream more, as their
expertise is a huge asset to the community.

-- Sean Silva


>
> Finally, one option is to have perpetually experimental backends.  Then
> all the code is in tree but no one in tree should ever be expected to
> update it.  That does have the big advantage that all of the code is there
> to discuss and the maintainers can make contributions to common code and
> gain/provide help in the community.  They can also be involved in
> discussions which impact them, such as changes to common code.
>
> Cheers,
> Pete
>
> On Feb 9, 2016, at 4:18 PM, Sean Silva via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> One data point (IIRC) is that the NVPTX backend sat in tree for a long
> time without a way to actually use it. But lately this has been opening
> up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). However, the
> obstacle for NVPTX was mostly one of software proprietariness (no real way
> to plug it into the driver stack except via Nvidia's own proprietary
> software), whereas the actual hardware was available. For the Lanai stuff,
> it seems like the hardware is fundamentally not available for purchase.
>
> The reverse situation is with e.g. Apple's GPU backends, where the devices
> are readily available, but (AFAIK) even if the backend were open-source you
> couldn't run the code produced by the open-source compiler.
>
> Or to put it in matrix form (this is all heavily prefixed by "AFAIK";
> corrections welcome):
>
> AMDGPU:      InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> NVPTX:       InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> Lanai:       InTree:?   DevicesAvailable:No  CanIRunTheCode:No
> Apple GPUs:  InTree:No  DevicesAvailable:Yes CanIRunTheCode:No
>
> I couldn't come up with a good name for the "Can I Run The Code" column.
> Basically it means: "assuming the backend were in open source, could I
> actually run the code produced by the open source backend somehow?".
>
> I had a quick look at lib/Target and it seems like every backend we have
> has "CanIRunTheCode:Yes" in theory.
> IIRC, the NVPTX stuff used to actually be "No" though?
>
> Anyway, just a random thought. Not sure what the conclusion is.
>
>
>