<div dir="ltr">> <span style="color:rgb(33,33,33)">On a side note, it would also seem that each kernel call is parsed twice </span><span style="color:rgb(33,33,33)">(once with CUDAIsDevice set to true and false, respectively).</span><br style="color:rgb(33,33,33)"><div><span style="color:rgb(33,33,33)"><br></span></div><div><font color="#212121">Yes, in two separate invocations of clang -cc1 -- one for the host, one for the device.</font></div><div><font color="#212121"><br></font></div><div><font color="#212121">> </font><span style="color:rgb(33,33,33)">The "__device__" would also prevent us </span><span style="color:rgb(33,33,33)">from calling that overload of the stub from __host__ __device__ </span><span style="color:rgb(33,33,33)">functions (the host version would be changed to also take the exec </span><span style="color:rgb(33,33,33)">config in as params).</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">Specifically, it's OK to call __device__ functions from __host__ __device__ functions, but only if the __host__ __device__ function is never codegen'ed for the host. Infrastructure for doing this already exists, so you shouldn't have to do anything.</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">> I do not know how to mark the stubs as __host__ or </span><span style="color:rgb(33,33,33)">__device__ though.</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><font color="#212121">It's __attribute__((device)), aka CUDADeviceAttr. See e.g. 
SemaDecl.cpp.</font></div><div><br></div><div><span style="color:rgb(33,33,33)">> Would the current implementation </span><span style="color:rgb(33,33,33)">then be able to correctly infer which stub to call based on __host__ and </span><span style="color:rgb(33,33,33)">__device__ attributes?</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><font color="#212121">Yes. These attributes are part of the function signature, and clang chooses which function to call based on (somewhat convoluted) overloading rules. It should DTRT.</font></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">> Is it possible to have the device side stub be </span><span style="color:rgb(33,33,33)">inlined?</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">Yes, LLVM will do that for you automatically if it's profitable. If it turns out it's not inlining the function when it should, I'd recommend we look into fixing the heuristics rather than slapping an always_inline on the function.</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">> Also, I would have to check whether we are compiling with at </span><span style="color:rgb(33,33,33)">least sm_35 before emitting the device stub at all.</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">Right.</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><div><font color="#212121">> </font><span style="color:rgb(33,33,33)">Would that be a reasonable approach?</span></div><div><span style="color:rgb(33,33,33)"><br></span></div><div><span style="color:rgb(33,33,33)">SGTM!</span></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Oct 19, 2017 at 11:53 AM Andre Reichenbach <<a href="mailto:A.Reiba@gmx.net">A.Reiba@gmx.net</a>> wrote:<br></div><blockquote class="gmail_quote" 
style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi everyone,<br>
<br>
I have been planning to implement CUDA dynamic parallelism in clang for<br>
a while now. I found some time to dig into clang's code recently in<br>
order to grasp how clang handles calls to kernels (thanks to Justin<br>
Lebar and Artem Belevich for pointing me to the most important<br>
locations!). I was wondering whether any expert on the CUDA<br>
code on the dev list would be willing to help me get a better<br>
understanding and clear up any misconceptions that I might have.<br>
<br>
<br>
Allow me to summarize how it works right now. It appears that the host<br>
side kernel calls work something like this:<br>
<br>
During parsing of function calls, when the <<< >>> syntax is<br>
encountered, a node is built that calls cudaConfigureCall using the<br>
execution configuration parameters. Then, a CUDAKernelCallExpr is built<br>
using the remaining function parameters, and the CallExpr for<br>
cudaConfigureCall is passed into that node.<br>
<br>
During CodeGen, the CUDAKernelCallExpr is translated into something like<br>
this:<br>
<br>
if ( cudaConfigureCall( .... ) )<br>
{<br>
<call to stub for the specific kernel referenced by the call expr><br>
}<br>
<br>
The stub for the kernel looks something like this:<br>
<br>
__host__ <something> kernel( arg0, arg1, ... argn ) {<br>
<br>
if( cudaSetupArgument( <stuff for arg0> ) ) {<br>
:<br>
if( cudaSetupArgument( <stuff for argn> ) ) {<br>
cudaLaunch( <kernelName> );<br>
} ... }<br>
<br>
}<br>
<br>
Did I miss anything important?<br>
<br>
On a side note, it would also seem that each kernel call is parsed twice<br>
(once with CUDAIsDevice set to true and false, respectively).<br>
<br>
<br>
<br>
For a device side kernel call, I would have to construct something like<br>
this instead (see e.g. the CUDA dynamic parallelism programming guide [1]), the<br>
first four parameters being the execution configuration:<br>
<br>
__device__ <something> kernel( gridDim, blockDim, sharedMem, stream,<br>
arg0, arg1, ..., argn )<br>
{<br>
<compute overall size and alignment for storing args to a buffer><br>
<br>
void * buf = cudaGetParameterBuffer( <alignment>, <size> );<br>
<br>
<copy args into buffer at proper offset and alignment><br>
<br>
cudaLaunchDevice( <kernel name>, buf, gridDim, blockDim, sharedMem,<br>
stream );<br>
}<br>
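The <compute overall size and alignment> and <copy args into buffer> steps amount to laying the kernel arguments out like struct members. A hedged C++ sketch of that layout computation (KernelArg, layoutArgs, packArgs, and alignTo are names I made up, not part of the device runtime):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// One kernel argument: where it lives, plus its size and alignment.
struct KernelArg {
  const void *data;
  std::size_t size;
  std::size_t align;
};

// Round 'offset' up to the next multiple of 'align' (align a power of 2).
static std::size_t alignTo(std::size_t offset, std::size_t align) {
  return (offset + align - 1) & ~(align - 1);
}

// Compute the layout the device stub would use: each argument goes at the
// next suitably aligned offset, exactly like fields in a struct. Returns
// the per-argument offsets; *totalSize and *maxAlign receive the values
// that would be passed to cudaGetParameterBuffer(alignment, size).
std::vector<std::size_t> layoutArgs(const std::vector<KernelArg> &args,
                                    std::size_t *totalSize,
                                    std::size_t *maxAlign) {
  std::vector<std::size_t> offsets;
  std::size_t offset = 0, align = 1;
  for (const KernelArg &a : args) {
    offset = alignTo(offset, a.align);
    offsets.push_back(offset);
    offset += a.size;
    if (a.align > align)
      align = a.align;
  }
  *totalSize = offset;
  *maxAlign = align;
  return offsets;
}

// Copy the arguments into the buffer cudaGetParameterBuffer returned.
void packArgs(char *buf, const std::vector<KernelArg> &args,
              const std::vector<std::size_t> &offsets) {
  for (std::size_t i = 0; i < args.size(); ++i)
    std::memcpy(buf + offsets[i], args[i].data, args[i].size);
}
```

For a kernel taking (int, double), for example, the int lands at offset 0 and the double at offset 8, giving a total size of 16 with alignment 8.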
<br>
<br>
It would appear that I cannot set up the execution configuration before<br>
the call to the stub, so I need to pass it into the stub instead. Thus,<br>
I would have to store the expressions for evaluating these in the<br>
CUDAKernelCallExpr instead of the call to cudaConfigureCall. The latter<br>
has to be moved to the host stub. The "__device__" would also prevent us<br>
from calling that overload of the stub from __host__ __device__<br>
functions (the host version would be changed to also take the exec<br>
config in as params). I do not know how to mark the stubs as __host__ or<br>
__device__ though.<br>
<br>
Would that be a reasonable approach? Would the current implementation<br>
then be able to correctly infer which stub to call based on __host__ and<br>
__device__ attributes? Is it possible to have the device side stub be<br>
inlined? Function calls tend to cost a lot of registers in my<br>
experience, so actually calling a function like that might be quite<br>
expensive. Also, I would have to check whether we are compiling with at<br>
least sm_35 before emitting the device stub at all.<br>
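For what it's worth, the attribute-based overloading I have in mind would look roughly like this. This is a hypothetical sketch with an invented name (kernel_stub), not what clang actually emits:

```cuda
// Two stubs for the same kernel that differ only in target attributes.
// At a call site, clang's CUDA overloading rules pick the host stub when
// compiling for the host and the device stub when compiling device code.
__host__ void kernel_stub(dim3 gridDim, dim3 blockDim,
                          size_t sharedMem, cudaStream_t stream, int n);
__device__ void kernel_stub(dim3 gridDim, dim3 blockDim,
                            size_t sharedMem, cudaStream_t stream, int n);

__host__ __device__ void launch_from_anywhere(int n) {
  // Resolves to the __host__ overload in the host compilation and to the
  // __device__ overload in the device compilation.
  kernel_stub(dim3(1), dim3(32), 0, /*stream=*/0, n);
}
```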
<br>
I would appreciate any feedback!<br>
<br>
Regards,<br>
<br>
Andre Reichenbach<br>
<br>
<br>
[1]<br>
<a href="https://docs.nvidia.com/cuda/pdf/CUDA_Dynamic_Parallelism_Programming_Guide.pdf" rel="noreferrer" target="_blank">https://docs.nvidia.com/cuda/pdf/CUDA_Dynamic_Parallelism_Programming_Guide.pdf</a><br>
<br>
</blockquote></div>