[LLVMdev] [RFC] Upstreaming LLVM/SPIR-V converter

Liu, Yaxun (Sam) Yaxun.Liu at amd.com
Sat May 23 22:20:03 PDT 2015


From: Mehdi Amini [mailto:mehdi.amini at apple.com]
Sent: Friday, May 22, 2015 1:54 AM


I’m not sure what you mean exactly by “they do not change the semantics”. Optimizations are data layout dependent and operate with some assumptions related to it, and having the IR optimized with one data layout and processed by a target with a different data layout is not supported. The IR after optimization should only be reused for targets that have a compatible data layout AFAIK.

I’m not sure how this fits in with SPIR-V. It seems that SPIR-V serialized from LLVM IR after optimization won't be target independent, and I’m not sure how that is handled/expressed in SPIR-V?

               Sorry for the late reply. I think your concern about the data layout of SPIR-V is valid. My understanding is that, in general, SPIR-V needs to assume some data layout to express the semantics of a program. Optimization should not use a data layout different from the one SPIR-V assumes. We are working on this. Hopefully we will come back with an answer soon.
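
To make the dependence concrete, here is a minimal sketch using LLVM's DataLayout class; the two layout strings are only illustrative (roughly the pointer-width difference the existing spir and spir64 targets distinguish):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  int main() {
    // Two layouts that differ only in pointer width.
    DataLayout DL32("e-p:32:32:32");
    DataLayout DL64("e-p:64:64:64");
    // Prints "4 vs 8". A pass that folds this answer into the IR (e.g.
    // when lowering a GEP or sizing a memcpy) has committed the module
    // to one layout; replaying that IR on a target using the other
    // layout is the "not supported" case described above.
    outs() << DL32.getPointerSize() << " vs " << DL64.getPointerSize() << "\n";
    return 0;
  }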

Thanks,

—
Mehdi





Sam

From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of David Majnemer
Sent: Thursday, May 21, 2015 5:59 PM
To: Sean Silva
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] [RFC] Upstreaming LLVM/SPIR-V converter



On Thu, May 21, 2015 at 1:50 PM, Sean Silva <chisophugis at gmail.com> wrote:


On Thu, May 21, 2015 at 8:03 AM, Mehdi Amini <mehdi.amini at apple.com> wrote:

On May 20, 2015, at 7:13 AM, Neil Henning <llvm at duskborn.com> wrote:

On 20/05/15 08:37, Owen Anderson wrote:



On May 19, 2015, at 7:32 PM, Sean Silva <chisophugis at gmail.com> wrote:



On Tue, May 19, 2015 at 4:05 PM, Owen Anderson <resistor at mac.com> wrote:

On May 19, 2015, at 9:48 AM, Neil Henning <llvm at duskborn.com> wrote:

The 'backend' in this context is purely so that we can then enable Clang to target SPIR-V in the same consistent manner as all the other targets it supports.

This seems like a terrible reason to choose the architecture of how it’s implemented in LLVM.  The clang driver is part of the LLVM project.  If we need to add support for some kind of special SPIR-V flag akin to -emit-llvm, we can do that.  If a particular frontend vendor wants to customize the flags, they can always do so themselves.

What do you envision as the triple and datalayout when a frontend is compiling to SPIR-V?

I’d recommend having its own Triple. Note that triples are *not* linked to targets in LLVM. My understanding of SPIR-V (and a look through the documentation seems to confirm it) is that it doesn’t specify anything about data layouts, presumably because it needs to accommodate many GPUs with varying ideas of what sizes and alignments should be. If anything this pushes me even more strongly towards the view that you do *not* want to run SPIR-V-destined IR through any more of LLVM (and particularly the CodeGen infrastructure) than you have to, since a lot of that will want to bake in DataLayout knowledge.
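
For what it's worth, the amount of target knowledge a module carries is visible right at the module level; a sketch (the triple spelling below is a placeholder for illustration, not something that has been decided):

  #include "llvm/IR/Module.h"
  using namespace llvm;

  void markAsSPIRVDestined(Module &M) {
    // Placeholder spelling; the actual triple string is exactly what is
    // being discussed in this thread.
    M.setTargetTriple("spirv-unknown-unknown");
    // Deliberately not calling M.setDataLayout(...): the less layout
    // knowledge the IR bakes in, the more GPU targets the resulting
    // SPIR-V can still serve.
  }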

At present in SPIR-V we have a decoration for alignment, so a user could decorate a type to specify a required alignment (which I would have thought would in turn become part of the data layout). Also, if we are using a non-logical addressing mode, the data layout would have a different pointer width specified (similar to the SPIR/SPIR64 targets in Clang at present). I'll raise with the SPIR group at Khronos what the expected behaviour of the alignment decoration is in this context, but at present I would say it would be legal for an LLVM module that is being turned into SPIR-V to have a user-defined data layout.
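
As a sketch of the LLVM side of that mapping (a hypothetical helper, written against the C++ API of the time), an explicitly aligned alloca is the kind of thing a converter could translate into an alignment decoration on the SPIR-V side; the exact mapping is still to be settled:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  void emitAlignedScratch(Module &M) {
    LLVMContext &Ctx = M.getContext();
    FunctionType *FTy = FunctionType::get(Type::getVoidTy(Ctx), false);
    Function *F = Function::Create(FTy, Function::ExternalLinkage, "kern", &M);
    BasicBlock *BB = BasicBlock::Create(Ctx, "entry", F);
    IRBuilder<> B(BB);
    // A user-required alignment carried explicitly on the instruction,
    // rather than derived from a target data layout.
    AllocaInst *Scratch = B.CreateAlloca(Type::getInt32Ty(Ctx), nullptr, "scratch");
    Scratch->setAlignment(16);
    B.CreateRetVoid();
  }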

Are we supposed to be able to optimize this IR? I mean, is this a valid use case: frontend -> IR -> (optimizer) -> IR -> SPIR-V?

I would hope that we would run at least mem2reg/sroa; do those need data layout?

SROA assumes that it has DataLayout.
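
For example, the splitting decisions SROA makes start from size and offset queries that only a DataLayout can answer; a minimal illustration (hypothetical helper name):

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  // Without a DataLayout there is no answer to "how big is this aggregate,
  // and where do its fields live?", which is what SROA needs in order to
  // split an alloca of this type into scalars.
  uint64_t allocSizeOfPair(LLVMContext &Ctx, const DataLayout &DL) {
    Type *Elts[] = {Type::getInt8Ty(Ctx), Type::getInt32Ty(Ctx)};
    StructType *Pair = StructType::get(Ctx, Elts);
    return DL.getTypeAllocSize(Pair); // depends on i32 ABI alignment/padding
  }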


-- Sean Silva

I think it has been acknowledged that the optimizer needs to be aware of the data layout, and that optimizations/transformations that are performed with one data layout are not necessarily valid with another one.

If there is not a single blessed data layout for SPIR-V in the spec, and the front-end can choose one, it seems to me that it has to be “serialized” in SPIR-V as well, doesn’t it?
The round-trip SPIR-V -> IR -> SPIR-V does not sound as useful as it could be if the data layout is not specified.

—
Mehdi









I'm pretty sure that a wide class of frontends for SPIR-V will literally be interested in just generating SPIR-V, with no knowledge of what the ultimate GPU target is; it is in that sense that they are "targeting" SPIR-V. That is, their frontend isn't generating $SPECIFICGPU-targeted IR and then merely asking to have it serialized in a specific way (a la -emit-llvm); they are generating IR that is meant to be turned into SPIR-V. That is fundamentally different from -emit-llvm (on the other hand, it may not be a target, but it sure smells like one).

I completely agree with you… except for the last sentence.

Honestly, the command line option aspect of this seems like a complete red herring to me.  We are talking about adding support for a data format which we will need to both serialize IR to and deserialize IR from.  This is exactly the same as the bitcode use case, and not at all like the use case of a target.  We should structure the implementation according to the ways it will actually be used; rewiring a clang driver command line flag to “make it look pretty” is the most trivial part of the entire process.
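
Concretely, the shape being argued for would mirror the existing bitcode entry points (WriteBitcodeToFile / parseBitcodeFile); a purely hypothetical sketch of what such a reader/writer surface could declare, none of which exists today:

  #include "llvm/IR/Module.h"
  #include "llvm/Support/ErrorOr.h"
  #include "llvm/Support/MemoryBuffer.h"
  #include "llvm/Support/raw_ostream.h"
  #include <memory>

  namespace llvm {
  // Serialize a module to a SPIR-V binary, analogous to WriteBitcodeToFile.
  std::error_code writeSPIRV(const Module *M, raw_ostream &OS);
  // Materialize a module from a SPIR-V binary, analogous to parseBitcodeFile.
  ErrorOr<std::unique_ptr<Module>> readSPIRV(MemoryBufferRef Buffer,
                                             LLVMContext &Context);
  } // namespace llvm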

So it seems to me that we can at least agree on the location of the mainstay of the code: it should reside in lib/SPIRV and be both a reader and a writer, akin to the Bitcode reader/writer pair. Why don't we at Khronos work on a patch against tip-of-tree LLVM that does that, and then we can revisit how a user actually invokes our code after that. We should be able to easily follow Owen's approach and see how that works out; worst case, if it turns out not to work, it would then be trivial to turn it into a thin backend. Seem reasonable?

Cheers,
-Neil.

_______________________________________________
LLVM Developers mailing list
LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev


