[llvm-dev] [RFC] Lanai backend

Philip Reames via llvm-dev llvm-dev at lists.llvm.org
Wed Feb 10 09:41:46 PST 2016



On 02/09/2016 08:59 PM, Pete Cooper via llvm-dev wrote:
> Hi Sean
>
> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is 
> no way to run the code.  The burden (however small) on the community 
> of having an in-tree backend they can’t use is too high IMO.
>
> As you point out ‘no way to run the code’ may mean not having access 
> to HW, or having HW but no API.
>
> NVPTX is a good example.  Now you can take the output from LLVM and 
> run it on HW.  It may or may not be how Nvidia do it in their code, 
> but that doesn’t matter: you can do it.  Same for AMDGPU.
>
> So -1 from me to having backends we can’t make use of.
For the record, I strongly disagree with this position.  I understand 
where you're coming from, but I see great value in having backends 
publicly available even for hardware we can't directly run.  I do see 
the support concerns, and we need to address them, but the outright 
rejection of backends based purely on their non-runnable nature is 
something I strongly disagree with.

To lay out a couple of benefits that no one has mentioned so far:
1) This is a highly visible clue as to what Google is running internally 
(admittedly, we don't know for what).  Given how secretive companies 
tend to be about such things, providing an incentive (upstreaming) to 
talk publicly about internal infrastructure is valuable.  I could see 
that being very useful to academics evaluating hardware ideas, for instance.
2) Just because a backend generates code which isn't "officially" 
runnable doesn't mean there aren't people who'd be interested in using 
it.  For instance, reverse engineering for security analysis, open 
source software on otherwise closed hardware, and supporting legacy 
products after the manufacturer drops support are all realistic use cases.
3) By getting more people involved in the open source project, we have 
an opportunity to further infect people inside companies with the desire 
to be good citizens in the community.  :)  That could be a very positive 
thing for all of us and the project as a whole in the long run.
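
On the mechanics, it's worth noting we already have a knob for keeping 
a backend out of everyone's default build while the code still lives in 
tree: the experimental-targets CMake option.  As a rough sketch only 
(assuming the backend were to land under the target name "Lanai" and 
register a "lanai" architecture; the exact spellings would be up to the 
patch), a bot or interested developer would opt in with something like:

   # Build only X86 by default, plus the experimental backend on request.
   cmake -G Ninja ../llvm \
     -DLLVM_TARGETS_TO_BUILD="X86" \
     -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="Lanai"

   # Code generation is then only reachable when asked for explicitly:
   llc -march=lanai input.ll -o input.s

Everyone else's default build stays untouched, which seems like a 
reasonable vehicle for the "perpetually experimental" idea Pete 
mentions below.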
>
> Finally, one option is to have perpetually experimental backends. 
>  Then all the code is in tree but no-one in tree should ever be 
> expected to update it.  That does have the big advantage that all of 
> the code is there to discuss and the maintainers can make 
> contributions to common code and gain/provide help in the community. 
>  They can also be involved in discussions which impact them such as 
> changes to common code.
>
> Cheers,
> Pete
>> On Feb 9, 2016, at 4:18 PM, Sean Silva via llvm-dev 
>> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>>
>> One data point (IIRC) is that the NVPTX backend sat in tree for a 
>> long time without a way to actually use it. But lately this has 
>> been opening up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). 
>> However, the obstacle for NVPTX was mostly one of software 
>> proprietariness (no real way to plug it into the driver stack except 
>> via Nvidia's own proprietary software), whereas the actual hardware 
>> was available. For the Lanai stuff, it seems like the hardware is 
>> fundamentally not available for purchase.
>>
>> The reverse situation is with e.g. Apple's GPU backends, where the 
>> devices are readily available, but (AFAIK) even if the backend were 
>> open-source you couldn't run the code produced by the open-source 
>> compiler.
>>
>> Or to put it in matrix form (this is all heavily prefixed by "AFAIK"; 
>> corrections welcome):
>>
>> AMDGPU:      InTree:Yes  DevicesAvailable:Yes  CanIRunTheCode:Yes
>> NVPTX:       InTree:Yes  DevicesAvailable:Yes  CanIRunTheCode:Yes
>> Lanai:       InTree:?    DevicesAvailable:No   CanIRunTheCode:No
>> Apple GPUs:  InTree:No   DevicesAvailable:Yes  CanIRunTheCode:No
>>
>> I couldn't come up with a good name for the "Can I Run The Code" 
>> column.  Basically it means: "assuming the backend were in open 
>> source, could I actually run the code produced by the open source 
>> backend somehow?"
>>
>> I had a quick look at lib/Target and it seems like every backend we 
>> have has "CanIRunTheCode:Yes" in theory.
>> IIRC, the NVPTX stuff used to actually be "No" though?
>>
>> Anyway, just a random thought. Not sure what the conclusion is.
