<div dir="ltr"><div>Hal,<br></div><div>There is no concurrent data transfer because the hardware only has one pipe for each direction, D2H and H2D. It is anyway the bottleneck. I don't think we can workaround that and this is expected.<br></div><div>I do want data transferring over lapping with computation from different host threads(streams) and also computation overlapping with each other.</div><div>Here is what I got from nvprof. Both overlapping is happening.</div><div>Ye<br></div><div><div><img src="cid:ii_jthqqpf90" alt="Screenshot from 2019-03-20 16:48:31.png" width="564" height="468"><br></div></div><div><br></div><div><br></div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr">===================<br>
Ye Luo, Ph.D.<br>Computational Science Division & Leadership Computing Facility<br>
Argonne National Laboratory</div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Finkel, Hal J. <<a href="mailto:hfinkel@anl.gov">hfinkel@anl.gov</a>> 于2019年3月20日周三 下午4:37写道:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On 3/20/19 2:55 PM, Alexandre Eichenberger wrote:
<blockquote type="cite">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div dir="ltr">Ye, Hal,</div>
<div dir="ltr"> </div>
<div dir="ltr">Just using a distinct stream per thread does not give you necessarily concurrency for transfers. E.g. is the runtime holding a single map lock during the entire mapping (including the data transfer)? I don't recall precisely what LLVM does, but
I think that was the case. If so, you don't have any asynchrony during transfers, but you have a simple implementation. You could relax this by having one lock per mapped memory location, so that a second target mapping a same data would wait until the data
is moved. Or you can use CUDA events per data transfers, and having a second thread's target computation wait for that event.</div>
</div>
</blockquote>
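For instance, the event-based variant could look roughly like this (a sketch with made-up function and stream names, using the CUDA runtime API rather than libomptarget's internals):

    // Sketch: one thread copies on its own stream and records an event;
    // another thread's computation waits on the event instead of a global lock.
    #include <cuda_runtime.h>

    void copy_then_compute(void *dst_dev, const void *src_host, size_t bytes,
                           cudaStream_t copy_stream, cudaStream_t compute_stream,
                           void (*launch_kernel)(cudaStream_t)) {
      cudaEvent_t copy_done;
      cudaEventCreateWithFlags(&copy_done, cudaEventDisableTiming);

      // Producer side: asynchronous H2D copy, then mark its completion.
      cudaMemcpyAsync(dst_dev, src_host, bytes, cudaMemcpyHostToDevice, copy_stream);
      cudaEventRecord(copy_done, copy_stream);

      // Consumer side: the wait is enqueued on the stream, so the host
      // thread does not block and unrelated streams keep running.
      cudaStreamWaitEvent(compute_stream, copy_done, 0);
      launch_kernel(compute_stream);

      cudaEventDestroy(copy_done);
    }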
Ye, do you know if the benefit you're seeing is from concurrency in data transfers, in compute, or both?
<blockquote type="cite">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div dir="ltr"> </div>
<div dir="ltr">Also, early reports from our users were that multiple threads doing CUDA calls was bad because the driver's locking scheme was poor. Hopefully, this has been addressed since. I would conduct a simple experiment to test this, before you recommend
your users to go that way.</div>
</div>
</blockquote>
My understanding from talking to folks at NVIDIA is that this has been an area they've actively worked on improving. It may be better, although testing is certainly required.

Thanks again,
Hal
<blockquote type="cite">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div dir="ltr"> </div>
<div dir="ltr">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div class="gmail-m_-183655538919931111socmaildefaultfont" dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr"><br>
Alexandre<br>
<br>
-----------------------------------------------------------------------------------------------------<br>
<span style="color:rgb(0,0,205)">Alexandre Eichenberger, Principal RSM, Advanced Compiler Technologies</span><br>
<span style="color:rgb(0,0,205)">- research</span>: compiler optimization (OpenMP, GPU, SIMD)<br>
<span style="color:rgb(0,0,205)">- info:</span> <a class="gmail-m_-183655538919931111moz-txt-link-abbreviated" href="mailto:alexe@us.ibm.com" target="_blank">alexe@us.ibm.com</a> <a class="gmail-m_-183655538919931111moz-txt-link-freetext" href="http://www.research.ibm.com/people/a/alexe" target="_blank">http://www.research.ibm.com/people/a/alexe</a><br>
<span style="color:rgb(0,0,205)">- phone</span>: 914-945-1812 (work), 914-312-3618 (cell)</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div dir="ltr"> </div>
<div dir="ltr"> </div>
<blockquote dir="ltr" style="border-left:2px solid rgb(170,170,170);margin-left:5px;padding-left:5px;direction:ltr;margin-right:0px">
----- Original message -----<br>
From: Ye Luo <a class="gmail-m_-183655538919931111moz-txt-link-rfc2396E" href="mailto:xw111luoye@gmail.com" target="_blank">
<xw111luoye@gmail.com></a><br>
To: Alexandre Eichenberger <a class="gmail-m_-183655538919931111moz-txt-link-rfc2396E" href="mailto:alexe@us.ibm.com" target="_blank">
<alexe@us.ibm.com></a><br>
Cc: <a class="gmail-m_-183655538919931111moz-txt-link-abbreviated" href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>,
<a class="gmail-m_-183655538919931111moz-txt-link-abbreviated" href="mailto:a.bataev@hotmail.com" target="_blank">a.bataev@hotmail.com</a>,
<a class="gmail-m_-183655538919931111moz-txt-link-abbreviated" href="mailto:jdoerfert@anl.gov" target="_blank">jdoerfert@anl.gov</a>,
<a class="gmail-m_-183655538919931111moz-txt-link-abbreviated" href="mailto:openmp-dev@lists.llvm.org" target="_blank">openmp-dev@lists.llvm.org</a>, Gheorghe-Teod Bercea
<a class="gmail-m_-183655538919931111moz-txt-link-rfc2396E" href="mailto:Gheorghe-Teod.Bercea@ibm.com" target="_blank"><Gheorghe-Teod.Bercea@ibm.com></a>, "Kevin K O'Brien"
<a class="gmail-m_-183655538919931111moz-txt-link-rfc2396E" href="mailto:caomhin@us.ibm.com" target="_blank"><caomhin@us.ibm.com></a><br>
Subject: Re: [Openmp-dev] OpenMP offload implicitly using streams<br>
Date: Wed, Mar 20, 2019 3:28 PM<br>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">Thank you Alex for the explanation. It is a very nice read to understand how to fully async offload working. I believe this is the eventual goal. But it takes time to get in.</div>
<div dir="ltr"> </div>
<div>At the moment, I'm looking for a way which just needs only blocking offload but gets concurrent execution.</div>
<div>My code pattern is</div>
<div>#pragma omp parallel</div>
<div>{ // parallel is at the very high level of the code hierarchy and contains loops over target regions</div>
<div> #pragma omp target</div>
<div> { // target is very local</div>
<div> //offload computation<br>
}<br>
}</div>
<div>Using 1 stream per host thread easily achieves concurrent execution.</div>
<div> </div>
I noticed some presentations showing a pipelining offload example using nowait, e.g. slide 30 of
http://on-demand.gputechconf.com/gtc/2018/presentation/s8344-openmp-on-gpus-first-experiences-and-best-practices.pdf
If we enable one stream per thread, the pipelining can work as well, even with blocking offload:
    #pragma omp parallel for
    for (int i = 0; i < nblocks; i++)
    {
      #pragma omp target
      {
        // offload the computation of one block
      }
    }

So I'm thinking of turning on one stream per thread to gain something, and I'm not aware of any downside.
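For comparison, the nowait form from those slides would look roughly like this (an illustrative sketch; the function, array, and block-size names are made up):

    // One host thread enqueues asynchronous target tasks; with blocking
    // offload plus per-thread streams, the parallel-for above achieves
    // a similar overlap.
    void pipeline(float *data, int nblocks, int bs) {
      for (int i = 0; i < nblocks; i++) {
        float *blk = data + (long)i * bs;
        #pragma omp target map(tofrom: blk[0:bs]) nowait
        for (int j = 0; j < bs; j++)
          blk[j] *= 2.0f; // stand-in for the per-block computation
      }
      #pragma omp taskwait // wait for all deferred target tasks
    }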
Best,
Ye
===================
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory
On Wed, Mar 20, 2019 at 1:26 PM Alexandre Eichenberger <alexe@us.ibm.com> wrote:

Hal,
<div dir="ltr">Supporting async for targets is not trivial. Right now, since everything is "blocking" a target can be easily separated into 5 tasks:</div>
<div dir="ltr"> </div>
<ol dir="ltr">
<li>wait for dependences to resolve</li><li>perform all the copy to the device</li><li>execute the target </li><li>perform all the copy from device to host</li><li>resolve all dependences for other tasks</li></ol>
<div dir="ltr">Going async ala LOMP has many advantages: all operations are asynchronous, and dependences from target to target tasks are directly enforced on the device. To us, this was the only way that users could effectively hide the high overhead of generating
targets, by enqueuing many dependent target tasks on the host, to prime the pipe of targets on the devices.</div>
<div dir="ltr"> </div>
<div dir="ltr">To do so, I believe you have to do the following.</div>
<ol dir="ltr">
<li>wait for all the host dependences; dependences from other device targets are tabulated but not waited for</li><li>select a stream (of one stream from a dependent target, if any) and enqueue wait for event for all tabulated dependences</li><li>enqueue all copy to device on stream (or enqueue sync event for data currently being copied over by other targets)</li><li>enqueue computation on stream</li><li>enqueue all copy from device on stream (this is speculative, as ref count may increase by another target executed before the data is actually copied back, but it's legal)</li><li>cleanup
<ol>
<li>blocking: wait for stream to be finished</li><li>non-blocking: have a callback from CUDA (which involve a separate thread) or have active polling by OpenMP threads when doing nothing and/or before doing a subsequent target task to determine when stream is finished</li><li>when 1 or 2 above are finished, cleanup the map data structures, resolve dependences for dependent tasks.</li></ol>
</li></ol>
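Concretely, steps 2-5 boil down to something like this sketch (the kernel and cleanup callback are hypothetical, not LOMP's or libomptarget's actual code):

    #include <cuda_runtime.h>

    __global__ void target_kernel(float *d, int n) { /* device code */ }

    // Step 6.2-style callback; ctx would hold the map-table entries to
    // release and the dependences to resolve (step 6.3).
    void CUDART_CB cleanup_cb(cudaStream_t s, cudaError_t e, void *ctx) {}

    void enqueue_target(float *host, float *dev, int n,
                        cudaStream_t stream, cudaEvent_t dep_event) {
      cudaStreamWaitEvent(stream, dep_event, 0);                    // step 2
      cudaMemcpyAsync(dev, host, n * sizeof(float),
                      cudaMemcpyHostToDevice, stream);              // step 3
      target_kernel<<<(n + 255) / 256, 256, 0, stream>>>(dev, n);   // step 4
      cudaMemcpyAsync(host, dev, n * sizeof(float),
                      cudaMemcpyDeviceToHost, stream);              // step 5
      cudaStreamAddCallback(stream, cleanup_cb, nullptr, 0);        // step 6.2
    }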
<div dir="ltr">This is compounded by the fact that async data movements are only performed with pinned memory, and any CUDA memory cannot be allocated directly as it is a synchronizing event. So runtime must handle it's own pool of device and pinned memory,
which requires additional work in Steps 3, 5, and 6.3 above.</div>
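For instance, a minimal pinned-buffer pool could look like this (a sketch; the class and member names are invented):

    // Buffers are allocated once up front, since cudaMallocHost
    // synchronizes the device; the async path only recycles them.
    #include <cuda_runtime.h>
    #include <mutex>
    #include <vector>

    class PinnedPool {
      std::mutex m;
      std::vector<void *> free_bufs;
    public:
      PinnedPool(size_t size, int count) {
        for (int i = 0; i < count; i++) {
          void *p = nullptr;
          cudaMallocHost(&p, size); // synchronizing; done once at startup
          free_bufs.push_back(p);
        }
      }
      void *acquire() {
        std::lock_guard<std::mutex> g(m);
        if (free_bufs.empty()) return nullptr; // caller falls back or waits
        void *p = free_bufs.back();
        free_bufs.pop_back();
        return p;
      }
      void release(void *p) {
        std::lock_guard<std::mutex> g(m);
        free_bufs.push_back(p);
      }
    };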
<div dir="ltr">
<div dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div dir="ltr" style="font-family:Arial,Helvetica,sans-serif;font-size:10.5pt">
<div dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr" style="font-family:Arial;font-size:10.5pt">
<div dir="ltr"> </div>
<div dir="ltr">To perform the cleanup in Step 6, you also need to cache all info associated with a target in a dedicated data structure.</div>
<div dir="ltr"> </div>
<div dir="ltr">As you may have noticed, if you want some async to work, you have basically to treat all target as async; the synchronous ones differ only by having an explicit wait in Step 6.1. So all this handling is in the critical path.</div>
<div dir="ltr"> </div>
<div dir="ltr">You will also need to carefully managed CUDA events associated with any explicit data movements, as subsequent target operations may be dependent on an actual memory operation to complete (in either directions). </div>
<div dir="ltr"> </div>
<div dir="ltr">This has been done in LOMP, was it fun, maybe not, but it's all feasible. </div>
<div dir="ltr"> </div>
<div dir="ltr">There is a possible saving grace, namely that you could implement async only under unified memory, which would simplify greatly the whole thing: eliminate Steps 3 & 5 above and associated bookkeeping.</div>
<div dir="ltr"> </div>
<div dir="ltr">However, most application writers that have optimized their code will tell you that unified-only program tend not to work too well, and that hybrid models (copy predictable data, use unified for unstructured data) is likely to deliver better
performance. So you could simplify your implementation at the cost of precluding async for the most optimized programs.</div>
<div dir="ltr"> </div>
<div dir="ltr">Happy to discuss it further, and explore with you alternative implementations.</div>
<div dir="ltr"><br>
Alexandre<br>
<br>
-----------------------------------------------------------------------------------------------------<br>
<span style="color:rgb(0,0,205)">Alexandre Eichenberger, Principal RSM, Advanced Compiler Technologies</span><br>
<span style="color:rgb(0,0,205)">- research</span>: compiler optimization (OpenMP, GPU, SIMD)<br>
<span style="color:rgb(0,0,205)">- info:</span> <a href="mailto:alexe@us.ibm.com" target="_blank">alexe@us.ibm.com</a> <a href="http://www.research.ibm.com/people/a/alexe" target="_blank">http://www.research.ibm.com/people/a/alexe</a><br>
<span style="color:rgb(0,0,205)">- phone</span>: 914-945-1812 (work), 914-312-3618 (cell)</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div dir="ltr"> </div>
<div dir="ltr"> </div>
<blockquote dir="ltr" style="border-left:2px solid rgb(170,170,170);margin-left:5px;padding-left:5px;direction:ltr;margin-right:0px">
----- Original message -----<br>
From: Gheorghe-Teod Bercea/US/IBM<br>
To: "Finkel, Hal J." <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>><br>
Cc: Alexey Bataev <<a href="mailto:a.bataev@hotmail.com" target="_blank">a.bataev@hotmail.com</a>>, "Doerfert, Johannes" <<a href="mailto:jdoerfert@anl.gov" target="_blank">jdoerfert@anl.gov</a>>, "<a href="mailto:openmp-dev@lists.llvm.org" target="_blank">openmp-dev@lists.llvm.org</a>"
<<a href="mailto:openmp-dev@lists.llvm.org" target="_blank">openmp-dev@lists.llvm.org</a>>, Ye Luo <<a href="mailto:xw111luoye@gmail.com" target="_blank">xw111luoye@gmail.com</a>>, Alexandre Eichenberger/Watson/IBM@IBMUS<br>
Subject: Re: [Openmp-dev] OpenMP offload implicitly using streams<br>
Date: Wed, Mar 20, 2019 1:49 PM<br>
<br>
<font size="2">I'm adding Alex to this thread. He should be able to shed some light on this issue.</font><br>
<br>
<font size="2">Thanks,</font><br>
<br>
<font size="2">--Doru</font><br>
<br>
<br>
<img alt="Inactive hide details for "Finkel, Hal
J." ---03/20/2019 01:13:33 PM---Thanks, Ye. I
suppose that I thought it always worked th" width="16" height="16" border="0"><font size="2" color="#424282">"Finkel,
Hal J." ---03/20/2019 01:13:33 PM---Thanks, Ye. I suppose that I thought it always worked that way :-) Alexey, Doru, do you know if ther</font><br>
<br>
<font size="2" color="#5F5F5F">From: </font><font size="2">"Finkel, Hal J." <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>></font><br>
<font size="2" color="#5F5F5F">To: </font><font size="2">Ye Luo <<a href="mailto:xw111luoye@gmail.com" target="_blank">xw111luoye@gmail.com</a>></font><br>
<font size="2" color="#5F5F5F">Cc: </font><font size="2">"<a href="mailto:openmp-dev@lists.llvm.org" target="_blank">openmp-dev@lists.llvm.org</a>" <<a href="mailto:openmp-dev@lists.llvm.org" target="_blank">openmp-dev@lists.llvm.org</a>>,
Alexey Bataev <<a href="mailto:a.bataev@hotmail.com" target="_blank">a.bataev@hotmail.com</a>>, Gheorghe-Teod Bercea <<a href="mailto:gheorghe-teod.bercea@ibm.com" target="_blank">gheorghe-teod.bercea@ibm.com</a>>,
"Doerfert, Johannes" <<a href="mailto:jdoerfert@anl.gov" target="_blank">jdoerfert@anl.gov</a>></font><br>
<font size="2" color="#5F5F5F">Date: </font><font size="2">03/20/2019 01:13 PM</font><br>
<font size="2" color="#5F5F5F">Subject: </font><font size="2">Re: [Openmp-dev] OpenMP offload implicitly using streams</font>
<hr style="color:rgb(128,145,165)" width="100%" size="2" align="left">
<br>
<br>
<br>
<font size="3">Thanks, Ye. I suppose that I thought it always worked that way :-)</font>
<p><font size="3">Alexey, Doru, do you know if there's any semantic problem or other concerns with enabling this option and/or making it the default?</font></p>
<p><font size="3"> -Hal</font></p>
<p><font size="3">On 3/20/19 11:32 AM, Ye Luo via Openmp-dev wrote:</font></p>
<ul style="padding-left:36pt;margin-left:0px;list-style-type:none">
<li><font size="3">Hi all,</font><br>
<font size="3">After going through the source, I didn't find CUDA stream support.</font><br>
<font size="3">Luckily, I only need to add</font><br>
<font size="3">#define CUDA_API_PER_THREAD_DEFAULT_STREAM</font><br>
<font size="3">before</font><br>
<font size="3">#include <cuda.h></font><br>
<font size="3">in libomptarget/plugins/cuda/src/rtl.cpp</font><br>
<font size="3">Then the multiple target goes to different streams and may execute concurrently.</font><br>
<font size="3">#pragma omp parallel</font><br>
<font size="3">{</font><br>
<font size="3"> #pragma omp target</font><br>
<font size="3"> {</font><br>
<font size="3"> //offload computation</font><br>
<font size="3"> }</font><br>
<font size="3">}</font><br>
<font size="3">This is exactly I want.</font><br>
<br>
<font size="3">I know the XL compiler uses streams in a different way but achieves similar effects.</font><br>
<font size="3">Is there anyone working on using streams with openmp target in llvm?</font><br>
<font size="3">Will clang-ykt get something similar to XL and upstream to the mainline?</font><br>
<br>
<font size="3">If we just add #define CUDA_API_PER_THREAD_DEFAULT_STREAM in the cuda rtl, will it be a trouble?</font><br>
<font size="3">As a compiler user, I'd like to have a better solution rather than having a patch just for myself.</font><br>
<br>
<font size="3">Best,</font><br>
<font size="3">Ye</font><br>
<font size="3">===================<br>
Ye Luo, Ph.D.<br>
Computational Science Division & Leadership Computing Facility<br>
Argonne National Laboratory</font><br>
<br>
<br>
<font size="3">Ye Luo <</font><a href="mailto:xw111luoye@gmail.com" target="_blank"><u><font size="3" color="#0000FF">xw111luoye@gmail.com</font></u></a><font size="3">> 于2019年3月17日周日 下午2:26写道:</font>
<ul style="padding-left:9pt;margin-left:0px;list-style-type:none">
<li><font size="3">Hi,</font><br>
<font size="3">How to turn on streams when using OpenMP offload?</font><br>
<font size="3">When different host threads individually start target regions (even not using nowait). The offloaded computation goes to different CUDA streams and may execute concurrently. This is currently available in XL.</font><br>
<font size="3">With Clang, nvprof shows only the run only uses the default stream.</font><br>
<font size="3">Is there a way to do that with Clang?</font><br>
<font size="3">On the other hand,</font><br>
<font size="3">nvcc has option --</font><i><font size="3">default</font></i><font size="3">-</font><i><font size="3">stream per</font></i><font size="3">-</font><i><font size="3">thread</font></i><br>
<font size="3">I'm not familar with clang CUDA, is there a similar option?</font><br>
<font size="3">Best,</font><br>
<font size="3">Ye</font><br>
<font size="3">===================<br>
Ye Luo, Ph.D.<br>
Computational Science Division & Leadership Computing Facility<br>
Argonne National Laboratory</font></li></ul>
_______________________________________________
Openmp-dev mailing list
Openmp-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/openmp-dev
<tt><font size="3" face="">--<br>
Hal Finkel<br>
Lead, Compiler Technology and Programming Languages<br>
Leadership Computing Facility<br>
Argonne National Laboratory</font></tt></blockquote>
<div dir="ltr"> </div>
</div>
</blockquote>
</div>
</blockquote>
<div dir="ltr"> </div>
</div>
<br>
</blockquote>
<pre class="gmail-m_-183655538919931111moz-signature" cols="72">--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory</pre>