[Openmp-commits] [PATCH] D108528: [OpenMP][Offloading] Add support for event related interfaces

Johannes Doerfert via Phabricator via Openmp-commits openmp-commits at lists.llvm.org
Mon Aug 23 15:24:20 PDT 2021


jdoerfert added a comment.

In D108528#2961216 <https://reviews.llvm.org/D108528#2961216>, @JonChesterfield wrote:

> In D108528#2961204 <https://reviews.llvm.org/D108528#2961204>, @jdoerfert wrote:
>
>> Wait: attach a wait to the AsyncObj stream so that nothing added afterwards runs before the event is fulfilled.
>
> Would 'fulfilled' mean the kernels that were launched before it have all completed?
>
> That probably involves a barrier packet on amdgpu. Needs to go on the same HSA queue as the associated kernels, which is probably what will be in the async info.

If you think of the event as a kernel on the stream/queue, it is fulfilled once it has been "executed".

>> Sync, block until the event is fulfilled.
>
> That is probably doable without poking at the HSA queue for amdgpu but I'm not certain of it.
>
> Can I invoke debug printing as a justification for passing the async object to all of them? I probably want to be able to tell what the lifecycle of a given event is in terms of what functions it was passed to.

I don't mind either way, passing it or not. Events are opaque too; you can even reference the AsyncInfo/stream/queue from them if you want. That said, I doubt you need the AsyncInfo to print stuff about the event.
In CUDA we really only need both to make the connection, which happens in two places: putting the event on the queue, and putting a wait for the event in the queue.
If HSA needs it for anything else, we can also change the API before a release without much hassle.
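For illustration, a minimal sketch of those two connection points using plain CUDA driver API calls; Stream and OtherStream are hypothetical handles standing in for what the AsyncInfo objects would carry, and this is not the actual plugin code:

  #include <cuda.h>

  // Sketch only; cuInit/context setup and error checking omitted.
  CUstream Stream, OtherStream;
  cuStreamCreate(&Stream, CU_STREAM_NON_BLOCKING);
  cuStreamCreate(&OtherStream, CU_STREAM_NON_BLOCKING);

  CUevent Event;
  cuEventCreate(&Event, CU_EVENT_DEFAULT);

  // "Put the event on the queue": the event is fulfilled once all work
  // submitted to Stream before this call has completed.
  cuEventRecord(Event, Stream);

  // "Put a wait for the event in the queue": work submitted to OtherStream
  // after this call will not start until the event is fulfilled.
  cuStreamWaitEvent(OtherStream, Event, 0);

  // Host-side sync: block until the event is fulfilled.
  cuEventSynchronize(Event);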


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D108528/new/

https://reviews.llvm.org/D108528


