<div dir="ltr">I watched the talk on the design, and it all made sense to me. I can't claim to have a deep knowledge of the requirements of GPU architectures, but I can say that this is basically the kind of stuff we had in mind when the token type was designed. What you are saying about modelling these special GPU operations as accessing inaccessible memory makes sense to me, but again, I am not an expert.<div><br></div><div>One of the challenges we've faced when trying to create regions for unpacked call sequences[1] is that unreachable code elimination can often "break" a region by deleting the token consumer. It's not clear if your proposal suffers from this problem, but it's something to keep in mind: optimizers can often discover that a plain store writes to a null pointer, the store then becomes unreachable, and there goes the entire rest of the function, leaving you with a half-open region. I can't imagine a plausible series of transforms on a reasonable GPU program would end up in this situation, though.</div><div>[1] <a href="http://lists.llvm.org/pipermail/llvm-dev/2020-January/138627.html">http://lists.llvm.org/pipermail/llvm-dev/2020-January/138627.html</a></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 28, 2020 at 3:14 PM Nicolai Hähnle via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all,<br>
<br>
As things have been quiet on this topic for quite some time now, I'm<br>
pinging this thread to see whether we can just go ahead with the<br>
current proposal, or if/how more discussion is required.<br>
<br>
Quick pointer to the proposed LangRef changes for the `convergent`<br>
attribute is here: <a href="https://reviews.llvm.org/D85603" rel="noreferrer" target="_blank">https://reviews.llvm.org/D85603</a><br>
<br>
The presentation from the dev meeting three weeks ago is here:<br>
<a href="https://www.youtube.com/watch?v=_Z5DuiVCFAw" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=_Z5DuiVCFAw</a><br>
<br>
Thanks,<br>
Nicolai<br>
<br>
<br>
On Tue, Aug 18, 2020 at 10:44 AM Nicolai Hähnle <<a href="mailto:nhaehnle@gmail.com" target="_blank">nhaehnle@gmail.com</a>> wrote:<br>
><br>
> Hi Hal,<br>
><br>
> On Mon, Aug 17, 2020 at 8:30 PM Hal Finkel <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>> wrote:<br>
> > I agree that the new scheme seems better for several reasons. So it<br>
> > seems like your recommendation is that we consider the existing<br>
> > attribute usage deprecated?<br>
><br>
> Yes.<br>
><br>
> Some form of auto-upgrade could be attempted via the<br>
> ConvergenceControlHeuristic of <a href="https://reviews.llvm.org/D85609" rel="noreferrer" target="_blank">https://reviews.llvm.org/D85609</a>, though<br>
> that's slightly big for on-the-fly upgrading in the bitcode reader. So<br>
> while the groundwork for potentially doing it is there, I don't see a<br>
> strong need or desire for it at the moment.<br>
><br>
> Cheers,<br>
> Nicolai<br>
><br>
><br>
> > Or should we auto-upgrade it to something?<br>
> ><br>
> > -Hal<br>
> ><br>
> ><br>
> > >>>> On 8/9/20 10:03 AM, Nicolai Hähnle via llvm-dev wrote:<br>
> > >>>>> Hi all,<br>
> > >>>>><br>
> > >>>>> please see <a href="https://reviews.llvm.org/D85603" rel="noreferrer" target="_blank">https://reviews.llvm.org/D85603</a> and its related changes for<br>
> > >>>>> our most recent and hopefully final attempt at putting the<br>
> > >>>>> `convergent` attribute on a solid theoretical foundation in a way that<br>
> > >>>>> is useful for modern GPU compiler use cases. We have clear line of<br>
> > >>>>> sight to enabling a new control flow implementation in the AMDGPU<br>
> > >>>>> backend which is built on this foundation. I have started upstreaming<br>
> > >>>>> some ground work recently. We expect this foundation to also be usable<br>
> > >>>>> by non-GPU whole-program vectorization environments, if they choose to<br>
> > >>>>> do so.<br>
> > >>>>><br>
> > >>>>> What follows is a (not quite) brief overview of what this is all<br>
> > >>>>> about, but do see the linked review and its "stack" for a fuller story.<br>
> > >>>>><br>
> > >>>>><br>
> > >>>>> The Problem<br>
> > >>>>> -----------<br>
> > >>>>> `convergent` was originally introduced in LLVM to label "barrier"<br>
> > >>>>> instructions in GPU programming languages. Those put some unusual<br>
> > >>>>> constraints on program transforms that weren't particularly well<br>
> > >>>>> understood at the time. Since then, GPU languages have introduced<br>
> > >>>>> additional instructions such as wave shuffles or subgroup reductions<br>
> > >>>>> that made the problem harder and exposed some shortcomings of the<br>
> > >>>>> current definition of `convergent`. At the same time, those additional<br>
> > >>>>> instructions helped illuminate what the problem really is.<br>
> > >>>>><br>
> > >>>>> Briefly, in whole-program vectorization environments such as GPUs, we<br>
> > >>>>> can expose opportunities for performance optimizations to developers<br>
> > >>>>> by allowing data exchange between threads that are grouped together as<br>
> > >>>>> lanes in a vector. As long as control flow is uniform, i.e. all<br>
> > >>>>> threads of the vector follow the same path through the CFG, it is<br>
> > >>>>> natural that all threads assigned to the same vector participate in<br>
> > >>>>> this data exchange. When control flow diverges, some threads may be<br>
> > >>>>> unavailable for the data exchange, but we still need to allow data<br>
> > >>>>> exchange among the threads that _are_ available.<br>
> > >>>>><br>
> > >>>>> This means that those data exchange operations -- i.e., `convergent`<br>
> > >>>>> operations -- communicate among a set of threads that implicitly<br>
> > >>>>> depends on the position of the operation in control flow.<br>
> > >>>>><br>
> > >>>>> The problem boils down to defining how the set of communicating<br>
> > >>>>> threads is determined.<br>
> > >>>>><br>
> > >>>>> This problem inherently has a large target- and environment-specific<br>
> > >>>>> component. Even if control flow is fully uniform, there is the<br>
> > >>>>> question of which threads are grouped together in the first place. In<br>
> > >>>>> the same way that LLVM IR has no built-in understanding of how threads<br>
> > >>>>> come to be in the first place (even in the more well-known<br>
> > >>>>> multi-threaded CPU programming world where convergent operations don't<br>
> > >>>>> exist), we don't concern ourselves at all with these types of<br>
> > >>>>> questions here. For the purpose of target-independent LLVM IR<br>
> > >>>>> semantics, we simply assume that they are answered _somehow_ and<br>
> > >>>>> instead focus on the part of the problem that relates to control flow.<br>
> > >>>>><br>
> > >>>>> When looking at high-level language source, it is often "intuitively<br>
> > >>>>> obvious" which threads should communicate. This is not the case in an<br>
> > >>>>> unstructured CFG. Here's an example of a non-trivial loop where a<br>
> > >>>>> reduction is used (the result of the reduction is the sum of the input<br>
> > >>>>> values of all participating threads):<br>
> > >>>>><br>
> > >>>>> A:<br>
> > >>>>> br label %B<br>
> > >>>>><br>
> > >>>>> B:<br>
> > >>>>> ...<br>
> > >>>>> %sum.b = call i32 @subgroupAdd(i32 %v) ; convergent<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc1, label %B, label %C<br>
> > >>>>><br>
> > >>>>> C:<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc2, label %B, label %D<br>
> > >>>>><br>
> > >>>>> D:<br>
> > >>>>> ; loop exit<br>
> > >>>>><br>
> > >>>>> Suppose this code is executed by two threads grouped in a (very short)<br>
> > >>>>> vector, and the threads execute the following sequences of basic blocks:<br>
> > >>>>><br>
> > >>>>>> Thread 1: A B B C D<br>
> > >>>>>> Thread 2: A B C B C D<br>
> > >>>>> There are two different intuitive ways of "aligning" the threads for<br>
> > >>>>> the purpose of communication between convergent operations:<br>
> > >>>>><br>
> > >>>>>> Align based on natural loops:<br>
> > >>>>>> Thread 1: A B - B C D<br>
> > >>>>>> Thread 2: A B C B C D<br>
> > >>>>>> Align based on a nested loop interpretation:<br>
> > >>>>>> Thread 1: A B B C - - D<br>
> > >>>>>> Thread 2: A B - C B C D<br>
> > >>>>> (Explanation of the alignment: In the first alignment, %sum.b is<br>
> > >>>>> always the sum of the input values from both threads. In the second<br>
> > >>>>> alignment, the first computation of %sum.b is collaborative between<br>
> > >>>>> the two threads; the second one in each thread simply returns the<br>
> > >>>>> input value for that thread.)<br>
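To make the two alignments above concrete, here is a small Python sketch (purely illustrative and not part of the proposal; the thread ids and input values are made up) that computes which dynamic executions of B communicate under each interpretation, and what each thread would see as the value of %sum.b:

```python
# Toy model of the example: each thread's trace is summarized as a list of
# outer do-while iterations, where each entry is the number of times the
# inner body B ran in that outer iteration.
#   Thread 1: A B B C D    -> one outer iteration, B executed twice -> [2]
#   Thread 2: A B C B C D  -> two outer iterations, B executed once -> [1, 1]
STRUCTURE = {1: [2], 2: [1, 1]}
VALUES = {1: 10, 2: 20}  # hypothetical %v per thread, constant for simplicity

def natural_alignment(structure):
    """Align the i-th dynamic execution of B across threads (single natural loop)."""
    flat = {t: sum(inner) for t, inner in structure.items()}
    return [sorted(t for t, n in flat.items() if n > i)
            for i in range(max(flat.values()))]

def nested_alignment(structure):
    """Align executions of B only when outer and inner iteration indices match."""
    groups = []
    for outer in range(max(len(s) for s in structure.values())):
        counts = {t: s[outer] for t, s in structure.items() if len(s) > outer}
        for inner in range(max(counts.values())):
            groups.append(sorted(t for t, n in counts.items() if n > inner))
    return groups

def subgroup_add(groups, values):
    """The %sum.b seen by each communicating group of threads."""
    return [sum(values[t] for t in g) for g in groups]

print(subgroup_add(natural_alignment(STRUCTURE), VALUES))  # [30, 30]
print(subgroup_add(nested_alignment(STRUCTURE), VALUES))   # [30, 10, 20]
```

Under the natural-loop alignment every execution of B is collaborative; under the nested interpretation only the first one is, matching the explanation above.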
> > >>>>><br>
> > >>>>> The example only has a single natural loop, but it could result from a<br>
> > >>>>> high-level language source that has two nested do-while loops, so the<br>
> > >>>>> second interpretation may well be what the programmer intended.<br>
> > >>>>><br>
> > >>>>> It has often been proposed that the alignment should be defined purely<br>
> > >>>>> based on the IR shown above, such as by always aligning based on<br>
> > >>>>> natural loops. This doesn't work, even if we ignore the possibility of<br>
> > >>>>> irreducible control flow. In the example, we could in principle<br>
> > >>>>> generate IR as follows if the "nested loop alignment" is desired:<br>
> > >>>>><br>
> > >>>>> A:<br>
> > >>>>> br label %outer.hdr<br>
> > >>>>><br>
> > >>>>> outer.hdr:<br>
> > >>>>> br label %B<br>
> > >>>>><br>
> > >>>>> B:<br>
> > >>>>> ...<br>
> > >>>>> %sum.b = call i32 @subgroupAdd(i32 %v) ; convergent<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc1, label %B, label %C<br>
> > >>>>><br>
> > >>>>> C:<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc2, label %outer.hdr, label %D<br>
> > >>>>><br>
> > >>>>> D:<br>
> > >>>>> ; loop exit<br>
> > >>>>><br>
> > >>>>> From a single-threaded perspective, it would be correct to short-cut<br>
> > >>>>> all paths through outer.hdr, but this results in the original IR and<br>
> > >>>>> therefore different behavior for the convergent operation (under such<br>
> > >>>>> a proposal). Worse, the part of the IR which makes the short-cutting<br>
> > >>>>> incorrect -- i.e., the affected convergent operation -- is potentially<br>
> > >>>>> far away from the paths that we're short-cutting. We want to avoid<br>
> > >>>>> this "spooky action at a distance" because it puts an undue burden on<br>
> > >>>>> generic code transforms.<br>
> > >>>>><br>
> > >>>>><br>
> > >>>>> The Solution<br>
> > >>>>> ------------<br>
> > >>>>> We introduce explicit annotations in the IR using "convergence tokens"<br>
> > >>>>> that "anchor" a convergent operation with respect to some other point<br>
> > >>>>> in control flow or with respect to the function entry. The details are<br>
> > >>>>> in the linked Phabricator review, so I will simply illustrate here how<br>
> > >>>>> this solves the example above.<br>
> > >>>>><br>
> > >>>>> To obtain the original natural loop alignment, we augment the example<br>
> > >>>>> as follows:<br>
> > >>>>><br>
> > >>>>> A:<br>
> > >>>>> %anchor = call token @llvm.experimental.convergence.anchor()<br>
> > >>>>> br label %B<br>
> > >>>>><br>
> > >>>>> B:<br>
> > >>>>> %loop = call token @llvm.experimental.convergence.loop() [ "convergencectrl"(token %anchor) ]<br>
> > >>>>> ...<br>
> > >>>>> %sum.b = call i32 @subgroupAdd(i32 %v) [ "convergencectrl"(token %loop) ]<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc1, label %B, label %C<br>
> > >>>>><br>
> > >>>>> C:<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc2, label %B, label %D<br>
> > >>>>><br>
> > >>>>> D:<br>
> > >>>>> ; loop exit<br>
> > >>>>><br>
> > >>>>> To obtain the loop nest alignment, we augment it as follows:<br>
> > >>>>><br>
> > >>>>> A:<br>
> > >>>>> %anchor = call token @llvm.experimental.convergence.anchor()<br>
> > >>>>> br label %outer.hdr<br>
> > >>>>><br>
> > >>>>> outer.hdr:<br>
> > >>>>> %outer = call token @llvm.experimental.convergence.loop() [ "convergencectrl"(token %anchor) ]<br>
> > >>>>> br label %B<br>
> > >>>>><br>
> > >>>>> B:<br>
> > >>>>> %inner = call token @llvm.experimental.convergence.loop() [ "convergencectrl"(token %outer) ]<br>
> > >>>>> ...<br>
> > >>>>> %sum.b = call i32 @subgroupAdd(i32 %v) [ "convergencectrl"(token %inner) ]<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc1, label %B, label %C<br>
> > >>>>><br>
> > >>>>> C:<br>
> > >>>>> ...<br>
> > >>>>> br i1 %cc2, label %outer.hdr, label %D<br>
> > >>>>><br>
> > >>>>> D:<br>
> > >>>>> ; loop exit<br>
> > >>>>><br>
> > >>>>> We end up with two nested natural loops as before, but short-cutting<br>
> > >>>>> through outer.hdr is easily seen as forbidden because of the "loop<br>
> > >>>>> heart" intrinsic along the path through outer.hdr. (We call it a<br>
> > >>>>> "heart" because intuitively that's where loop iterations are counted<br>
> > >>>>> for the purpose of convergent operations, i.e. we count "heartbeats"<br>
> > >>>>> of the loop.) Having the non-removable outer.hdr block may seem like<br>
> > >>>>> an inefficiency, but at least in the AMDGPU backend we end up needing<br>
> > >>>>> that basic block anyway to implement the desired behavior. We don't<br>
> > >>>>> literally count loop iterations in the AMDGPU backend, but we end up<br>
> > >>>>> doing something isomorphic that also requires the block. We suspect<br>
> > >>>>> that other implementations will have similar requirements. In any<br>
> > >>>>> case, backends are free to optimize the block away using<br>
> > >>>>> target-specific handling.<br>
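As a sanity check of the "heartbeat" intuition, here is a toy Python model (my own sketch, not the proposal's formal semantics): group dynamic executions of B by the vector of heart counters in effect when B is reached. With a heart only in B, the traces from the earlier example yield the natural-loop alignment; adding the heart in outer.hdr yields the nested alignment, which is exactly why folding away the path through outer.hdr would change observable behavior:

```python
def heart_groups(traces, hearts):
    """Group dynamic executions of B by their heart-counter ("heartbeat") vector.

    traces: thread id -> list of basic-block names executed, in order.
    hearts: blocks containing a convergence.loop() heart, outermost first.
    Executions of B communicate iff all heartbeat counters match; re-entering
    an outer heart resets the counters of all inner hearts.
    """
    groups = {}
    for tid, trace in traces.items():
        beats = [0] * len(hearts)
        for block in trace:
            if block in hearts:
                level = hearts.index(block)
                beats[level] += 1
                beats[level + 1:] = [0] * (len(hearts) - level - 1)
            if block == "B":
                groups.setdefault(tuple(beats), []).append(tid)
    return sorted(groups.values())

traces1 = {1: "A B B C D".split(), 2: "A B C B C D".split()}
# Original IR: the only heart is in B -> natural-loop alignment.
print(heart_groups(traces1, ["B"]))              # [[1, 2], [1, 2]]

traces2 = {1: "A outer.hdr B B C D".split(),
           2: "A outer.hdr B C outer.hdr B C D".split()}
# IR with outer.hdr: hearts in outer.hdr and B -> nested alignment.
print(heart_groups(traces2, ["outer.hdr", "B"]))  # [[1], [1, 2], [2]]
```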
> > >>>>><br>
> > >>>>><br>
> > >>>>> Have you tried other solutions?<br>
> > >>>>> -------------------------------<br>
> > >>>>> Yes, we've tried many things over the years. I've personally been<br>
> > >>>>> thinking about and working on this problem on and off for a<br>
> > >>>>> significant amount of time over the last four years. I supervised a<br>
> > >>>>> practical master thesis on a related topic during this time. Other<br>
> > >>>>> people have thought about the problem a lot as well. Discussions with<br>
> > >>>>> many people in the LLVM community, inside AMD, and elsewhere have all<br>
> > >>>>> helped our understanding of the problem. Many different ideas were<br>
> > >>>>> considered and ultimately rejected in the process.<br>
> > >>>>><br>
> > >>>>> All attempts to solve the problem _without_ some form of "convergence<br>
> > >>>>> token" have suffered from spooky action at a distance. This includes<br>
> > >>>>> the current definition of `convergent` in the LangRef.<br>
> > >>>>><br>
> > >>>>> We tried _very_ hard to find a workable definition of `convergent`<br>
> > >>>>> that doesn't talk about groups of threads in the LangRef, but<br>
> > >>>>> ultimately came to the conclusion that thinking about convergent<br>
> > >>>>> operations inherently involves threads of execution, and it's much<br>
> > >>>>> easier to just accept that. As I've already written above, convergent<br>
> > >>>>> operations really are a form of communication among threads (often<br>
> > >>>>> through no memory at all, or only inaccessible memory) where the set<br>
> > >>>>> of communicating threads is affected by the static position of the<br>
> > >>>>> operation in control flow.<br>
> > >>>>><br>
> > >>>>> The proposal contains no lock-step execution, even though that's what<br>
> > >>>>> many people think of when they think about GPU programming models. It<br>
> > >>>>> is purely expressed in terms of communicating threads, which could in<br>
> > >>>>> theory just be threads on a CPU. We believe that this is more in line<br>
> > >>>>> with how LLVM works. Besides, we found lock-step execution semantics<br>
> > >>>>> to be undesirable from first principles, because they prevent certain<br>
> > >>>>> desirable program transforms or at least make them much more difficult<br>
> > >>>>> to achieve.<br>
> > >>>>><br>
> > >>>>> The most recent earlier proposal we brought to the LLVM community is<br>
> > >>>>> here: <a href="https://reviews.llvm.org/D68994" rel="noreferrer" target="_blank">https://reviews.llvm.org/D68994</a>. That one already used a form of<br>
> > >>>>> convergence token, but we ultimately rejected it for two main reasons.<br>
> > >>>>> First, the convergence control intrinsics described there create more<br>
> > >>>>> "noise" in the IR, i.e. there are more intrinsic calls than with what<br>
> > >>>>> we have now. Second, the particular formalism of dynamic instances<br>
> > >>>>> used there had some undesirable side effects that put undue burden on<br>
> > >>>>> transforms that should be trivial, like dead-code elimination.<br>
> > >>>>><br>
> > >>>>> The current proposal still talks about dynamic instances, but there is<br>
> > >>>>> no longer any automatic propagation of convergence: even if two<br>
> > >>>>> threads execute the same dynamic instance of an instruction, whether<br>
> > >>>>> they execute the same dynamic instance of the very next instruction in<br>
> > >>>>> straight-line code is implementation-defined in general (which means<br>
> > >>>>> that program transforms are free to make changes that could affect this).<br>
> > >>>>><br>
> > >>>>> We chose to use an operand bundle for the convergence token so that<br>
> > >>>>> generic code can identify the relevant token value easily. We<br>
> > >>>>> considered adding the convergence token as a regular function call<br>
> > >>>>> argument, but this makes it impractical for generic transforms to<br>
> > >>>>> understand functions that take both a convergence token and some other<br>
> > >>>>> token.<br>
> > >>>>><br>
> > >>>>> Cheers,<br>
> > >>>>> Nicolai<br>
> > >>>> --<br>
> > >>>> Hal Finkel<br>
> > >>>> Lead, Compiler Technology and Programming Languages<br>
> > >>>> Leadership Computing Facility<br>
> > >>>> Argonne National Laboratory<br>
> > >>>><br>
> > ><br>
> ><br>
><br>
><br>
> --<br>
> Lerne, wie die Welt wirklich ist,<br>
> aber vergiss niemals, wie sie sein sollte.<br>
<br>
<br>
<br>
-- <br>
Lerne, wie die Welt wirklich ist,<br>
aber vergiss niemals, wie sie sein sollte.<br>
_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote></div>