[llvm-dev] [RFC] Lanai backend

Hal Finkel via llvm-dev llvm-dev at lists.llvm.org
Tue Feb 9 21:15:01 PST 2016


----- Original Message -----
> From: "Pete Cooper via llvm-dev" <llvm-dev at lists.llvm.org>
> To: "Sean Silva" <chisophugis at gmail.com>
> Cc: "llvm-dev" <llvm-dev at lists.llvm.org>
> Sent: Tuesday, February 9, 2016 10:59:58 PM
> Subject: Re: [llvm-dev] [RFC] Lanai backend
> 
> 
> Hi Sean
> 
> 
> I think you’ve summed it up really well here.
> 
> 
> Personally I don’t think we should accept backends for which there is
> no way to run the code. The burden (however small) on the community
> of having an in-tree backend they can’t use is too high IMO.
> 
> 
> As you point out ‘no way to run the code’ may mean not having access
> to HW, or having HW but no API.
> 

Out of curiosity, would the existence of some kind of open-source emulator affect your opinion on this? Or does it need to be actual hardware?

 -Hal 

> 
> NVPTX is a good example. Now you can take the output from LLVM and
> run it on HW. It may or may not be how Nvidia do it in their code,
> but that doesn’t matter, you can do it. Same for AMDGPU.
> 
> 
> So -1 from me to having backends we can’t make use of.
> 
> 
> Finally, one option is to have perpetually experimental backends.
> Then all the code is in tree but no-one in tree should ever be
> expected to update it. That does have the big advantage that all of
> the code is there to discuss and the maintainers can make
> contributions to common code and gain/provide help in the community.
> They can also be involved in discussions which impact them such as
> changes to common code.
> 
> 
> Cheers,
> Pete
> 
> 
> 
> 
> On Feb 9, 2016, at 4:18 PM, Sean Silva via llvm-dev <
> llvm-dev at lists.llvm.org > wrote:
> 
> 
> One data point (IIRC) is that the NVPTX backend sat in tree for a
> long time without a way to actually use it. But lately this has
> been opening up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html
> ). However, the obstacle for NVPTX was mostly a software
> proprietary-ness (no way to plug it into the driver stack really,
> except via nvidia's own proprietary software), whereas the actual
> hardware was available. For the Lanai stuff, it seems like the
> hardware is fundamentally not available for purchase.
> 
> 
> The reverse situation is with e.g. Apple's GPU backends, where the
> devices are readily available, but (AFAIK) even if the backend were
> open-source you couldn't run the code produced by the open-source
> compiler.
> 
> 
> Or to put it in matrix form (this is all heavily prefixed by "AFAIK";
> corrections welcome):
> 
> 
> AMDGPU:      InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> NVPTX:       InTree:Yes DevicesAvailable:Yes CanIRunTheCode:Yes
> Lanai:       InTree:?   DevicesAvailable:No  CanIRunTheCode:No
> Apple GPU's: InTree:No  DevicesAvailable:Yes CanIRunTheCode:No
> 
> I couldn't come up with a good name for the "Can I Run The Code"
> column. Basically it means: "assuming the backend were in open
> source, could I actually run the code produced by the open source
> backend somehow?".
> 
> 
> I had a quick look at lib/Target and it seems like every backend we
> have has "CanIRunTheCode:Yes" in theory.
> IIRC, the NVPTX stuff used to actually be "No" though?
> 
> 
> Anyway, just a random thought. Not sure what the conclusion is.
> 
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
> 

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
