<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hi all,</div><div>After going through the source, I didn't find CUDA stream support.</div><div>Luckily, I only needed to add</div><div>#define CUDA_API_PER_THREAD_DEFAULT_STREAM</div><div>before</div><div>#include <cuda.h></div><div>in libomptarget/plugins/cuda/src/rtl.cpp.</div><div>Then multiple target regions go to different streams and may execute concurrently.<br></div><div><div>#pragma omp parallel</div><div>{</div><div> #pragma omp target</div><div> {</div><div> //offload computation<br></div><div> }<br></div><div>}</div><div>This is exactly what I want.</div><div><br></div><div>I know the XL compiler uses streams in a different way but achieves similar effects.<br></div></div><div>Is anyone working on using streams with OpenMP target in LLVM?</div><div>Will clang-ykt get something similar to XL and upstream it to the mainline?</div><div><br></div><div>If we just add #define CUDA_API_PER_THREAD_DEFAULT_STREAM in the CUDA rtl, will it cause any trouble?</div><div>As a compiler user, I'd like a proper solution rather than having a patch just for myself.</div><div><br></div><div>Best,<br></div><div>Ye<br></div><div dir="ltr"><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr">===================<br>
Ye Luo, Ph.D.<br>Computational Science Division & Leadership Computing Facility<br>
Argonne National Laboratory</div></div></div></div></div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Ye Luo <<a href="mailto:xw111luoye@gmail.com">xw111luoye@gmail.com</a>> wrote on Sun, Mar 17, 2019 at 2:26 PM:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi,</div><div>How do I turn on streams when using OpenMP offload?</div><div>When different host threads individually start target regions (even without using nowait), the offloaded computation goes to different CUDA streams and may execute concurrently. This is currently available in XL.</div><div>With Clang, nvprof shows that the run only uses the default stream.</div><div>Is there a way to do that with Clang?</div><div>On the other hand,<br></div><div>nvcc has the option --default-stream per-thread</div><div>I'm not familiar with Clang CUDA; is there a similar option?</div><div>Best,<br></div><div>Ye<br></div><div><div><div dir="ltr" class="gmail-m_-5563546046592982142gmail_signature"><div dir="ltr"><div><div dir="ltr">===================<br>
Ye Luo, Ph.D.<br>Computational Science Division & Leadership Computing Facility<br>
Argonne National Laboratory</div></div></div></div></div></div></div>
</blockquote></div></div></div></div></div></div>