I'm adding Alex to this thread. He should be able to shed some light on this issue.

Thanks,

--Doru



From: "Finkel, Hal J." <hfinkel@anl.gov>
To: Ye Luo <xw111luoye@gmail.com>
Cc: "openmp-dev@lists.llvm.org" <openmp-dev@lists.llvm.org>, Alexey Bataev <a.bataev@hotmail.com>, Gheorghe-Teod Bercea <gheorghe-teod.bercea@ibm.com>, "Doerfert, Johannes" <jdoerfert@anl.gov>
Date: 03/20/2019 01:13 PM
Subject: Re: [Openmp-dev] OpenMP offload implicitly using streams
------------------------------------------------------------------------

Thanks, Ye. I suppose that I thought it always worked that way :-)

Alexey, Doru, do you know if there's any semantic problem or other concerns with enabling this option and/or making it the default?
 -Hal

On 3/20/19 11:32 AM, Ye Luo via Openmp-dev wrote:
Hi all,
After going through the source, I didn't find CUDA stream support.
Luckily, I only need to add
#define CUDA_API_PER_THREAD_DEFAULT_STREAM
before
#include <cuda.h>
in libomptarget/plugins/cuda/src/rtl.cpp.
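Concretely, the change is just these lines at the very top of rtl.cpp (a sketch of the placement, not the file's exact surroundings; the comment is only explanatory):

// Must be defined before any CUDA header is included, so that the
// driver API calls in this translation unit use a per-thread default
// stream instead of the single legacy default stream.
#define CUDA_API_PER_THREAD_DEFAULT_STREAM
#include <cuda.h>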
Then multiple target regions go to different streams and may execute concurrently:

#pragma omp parallel
{
  #pragma omp target
  {
    // offload computation
  }
}

This is exactly what I want.
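A self-contained sketch of that pattern, with illustrative sizes and loop body (not my actual benchmark), where each host thread offloads its own chunk of work:

// Each host thread launches its own target region; with per-thread
// default streams these offloads may overlap on the GPU.
#include <cstdio>
#include <omp.h>

int main() {
  constexpr int N = 1 << 20;
  static double buf[4][N];                 // one slab per host thread

  #pragma omp parallel num_threads(4)
  {
    double *x = buf[omp_get_thread_num()];
    #pragma omp target teams distribute parallel for map(tofrom: x[0:N])
    for (int i = 0; i < N; ++i)
      x[i] = 2.0 * x[i] + 1.0;             // stand-in for real offloaded work
  }

  printf("buf[0][0] = %f\n", buf[0][0]);
  return 0;
}

Built with something like clang++ -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda test.cpp, then run under nvprof to see which streams the kernels land on.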
I know the XL compiler uses streams in a different way but achieves similar effects.
Is there anyone working on using streams with OpenMP target in LLVM?
Will clang-ykt get something similar to XL and upstream it to the mainline?

If we just add #define CUDA_API_PER_THREAD_DEFAULT_STREAM in the CUDA RTL, will it cause any trouble?
As a compiler user, I'd like to have a better solution rather than having a patch just for myself.

Best,
Ye
===================
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory


On Sun, Mar 17, 2019 at 2:26 PM Ye Luo <xw111luoye@gmail.com> wrote:
Hi,
How do I turn on streams when using OpenMP offload?
When different host threads individually start target regions (even without using nowait), the offloaded computation goes to different CUDA streams and may execute concurrently. This is currently available in XL.
With Clang, nvprof shows that the run only uses the default stream.
Is there a way to do that with Clang?
On the other hand, nvcc has the option --default-stream per-thread.
I'm not familiar with clang CUDA; is there a similar option?
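For concreteness, a minimal plain-CUDA sketch of the behavior in question (arbitrary sizes): compiled with nvcc --default-stream per-thread, or with -DCUDA_API_PER_THREAD_DEFAULT_STREAM, the asynchronous work issued by the different host threads can overlap; without either, everything serializes on the single legacy default stream.

// Each std::thread issues asynchronous work on "stream 0", which is
// either the shared legacy default stream or a per-thread default
// stream, depending on how the file is compiled.
#include <cuda_runtime.h>
#include <thread>
#include <vector>

int main() {
  const size_t bytes = 64 << 20;
  std::vector<std::thread> workers;
  for (int t = 0; t < 4; ++t)
    workers.emplace_back([bytes] {
      void *p = nullptr;
      cudaMalloc(&p, bytes);
      cudaMemsetAsync(p, 0, bytes, 0);   // 0 = this thread's default stream
      cudaStreamSynchronize(0);
      cudaFree(p);
    });
  for (auto &w : workers)
    w.join();
  return 0;
}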
Best,
Ye
===================
Ye Luo, Ph.D.
Computational Science Division & Leadership Computing Facility
Argonne National Laboratory

_______________________________________________
Openmp-dev mailing list
Openmp-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/openmp-dev

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory