[llvm-dev] [RFC] Lanai backend

Sean Silva via llvm-dev llvm-dev at lists.llvm.org
Tue Feb 9 16:19:44 PST 2016


On Tue, Feb 9, 2016 at 4:18 PM, Sean Silva <chisophugis at gmail.com> wrote:

>
>
> On Tue, Feb 9, 2016 at 2:37 PM, Chris Lattner via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>>
>> > On Feb 9, 2016, at 9:40 AM, Jacques Pienaar via llvm-dev <
>> llvm-dev at lists.llvm.org> wrote:
>> >
>> > Hi all,
>> >
>> > We would like to contribute a new backend for the Lanai processor
>> (derived from the processor described in [1]).
>>
>> Hi Jacques,
>>
>> We generally have a low bar for accepting new “experimental” backends,
>> but I think that this is the first proposal to add a target for a hardware
>> that general LLVM contributors can’t have access to.  As such, we’ll have
>> to figure out as a community whether this makes sense.
>>
>> Here are the tradeoffs I see of accepting the backend:
>>
>> 1) I imagine that there is a big win for you, not having to merge with
>> mainline.  Maintaining an out of tree backend is a pain :-)
>>
>> 2) For the community, this is probably a net loss, since changes to common
>> codegen code would require updating your backend, but no one else in
>> the community would benefit from the target being in mainline.
>>
>> 3) There is probably a small but non-zero benefit to keeping your team
>> working directly on mainline, since you’re more likely to do ancillary work
>> in ToT.  If your development is in mainline, this work is most likely to go
>> into llvm.org instead of into your local branch.
>>
>> 4) There could be an educational benefit of having the backend,
>> particularly if it has unique challenges to overcome.
>>
>>
>> What do others think about this?  I know that several organizations have
>> not even bothered proposing internal-only targets for inclusion in
>> llvm.org, since they would effectively be contributing dead code that
>> the community would have to maintain.
>>
>
> One data point (IIRC) is that the NVPTX backend sat in tree for a long
> time without a way to actually use it. But lately this has been opening
> up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). However, the
> obstacle for NVPTX was mostly software proprietary-ness (no way to plug
> it into the driver stack really, except via NVIDIA's own proprietary
> software
>

To clarify: I mean that only the proprietary software could use the backend
in a useful way, not that proprietary software was merely needed at some
point in the driver stack.

-- Sean Silva


> ), whereas the actual hardware was available. For the Lanai stuff, it
> seems like the hardware is fundamentally not available for purchase.
>
> The reverse situation is with e.g. Apple's GPU backends, where the devices
> are readily available, but (AFAIK) even if the backend were open-source you
> couldn't run the code produced by the open-source compiler.
>
> Or to put it in matrix form (this is all heavily prefixed by "AFAIK";
> corrections welcome):
>
> AMDGPU:     InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> NVPTX:      InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> Lanai:      InTree:?   DevicesAvailable:No  CanIRunTheCode:No
> Apple GPUs: InTree:No  DevicesAvailable:Yes CanIRunTheCode:No
>
> I couldn't come up with a good name for the "Can I Run The Code" column.
> Basically it means: "assuming the backend were open source, could I
> actually run the code produced by the open-source backend somehow?".
>
> I had a quick look at lib/Target and it seems like every backend we have
> has "CanIRunTheCode:Yes" in theory.
> IIRC, the NVPTX stuff used to actually be "No" though?
>
> Anyway, just a random thought. Not sure what the conclusion is.
>
> -- Sean Silva
>
>
>> -Chris
>>
>
>