[llvm-dev] JIT compiling CUDA source code

Geoff Levner via llvm-dev llvm-dev at lists.llvm.org
Tue Nov 17 09:39:44 PST 2020


We have an application that allows the user to compile and execute C++ code
on the fly, using Orc JIT v2, via the LLJIT class. And we would like to
extend it to allow the user to provide CUDA source code as well, for GPU
programming. But I am having a hard time figuring out how to do it.

To JIT compile C++ code, we do basically as follows:

1. call Driver::BuildCompilation(), which returns a clang Command to execute
2. create a CompilerInvocation using the arguments from the Command
3. create a CompilerInstance around the CompilerInvocation
4. use the CompilerInstance to execute an EmitLLVMOnlyAction
5. retrieve the resulting Module from the action and add it to the JIT
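The five steps above look roughly like this in code (a minimal sketch against LLVM/Clang 11-era APIs; the function name addSourceToJIT is my own, and all error handling and diagnostics setup beyond the bare minimum is omitted):

```cpp
#include "clang/CodeGen/CodeGenAction.h"
#include "clang/Driver/Compilation.h"
#include "clang/Driver/Driver.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/CompilerInvocation.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/Host.h"

llvm::Error addSourceToJIT(llvm::orc::LLJIT &JIT, const char *Path,
                           clang::DiagnosticsEngine &Diags) {
  // 1. Ask the driver how it would compile this file.
  clang::driver::Driver D("clang", llvm::sys::getDefaultTargetTriple(), Diags);
  std::vector<const char *> Args = {"clang", "-fsyntax-only", Path};
  std::unique_ptr<clang::driver::Compilation> C(D.BuildCompilation(Args));

  // 2. Build a CompilerInvocation from the (single) cc1 job's arguments.
  const clang::driver::Command &Cmd = *C->getJobs().begin();
  auto Inv = std::make_shared<clang::CompilerInvocation>();
  clang::CompilerInvocation::CreateFromArgs(*Inv, Cmd.getArguments(), Diags);

  // 3. Wrap the invocation in a CompilerInstance.
  clang::CompilerInstance CI;
  CI.setInvocation(std::move(Inv));
  CI.createDiagnostics();

  // 4. Run the frontend action that emits an LLVM module (no codegen to object).
  auto Ctx = std::make_unique<llvm::LLVMContext>();
  clang::EmitLLVMOnlyAction Act(Ctx.get());
  if (!CI.ExecuteAction(Act))
    return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                   "compilation failed");

  // 5. Hand the resulting module to the JIT.
  return JIT.addIRModule(
      llvm::orc::ThreadSafeModule(Act.takeModule(), std::move(Ctx)));
}
```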

But compiling C++ requires only a single clang command. When you add CUDA
to the equation, you add several other steps. If you use the clang front
end to compile, clang does the following:

1. compiles the device source code
2. compiles the resulting PTX code using the CUDA ptxas command
3. builds a "fat binary" using the CUDA fatbinary command
4. compiles the host source code and links in the fat binary
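For reference, you can see exactly which sub-commands the driver would run, without executing any of them, by passing -### (this assumes a clang built with the NVPTX backend and a local CUDA toolkit; axpy.cu is just a placeholder file name):

```shell
# Print the jobs clang would run for a CUDA input: one cc1 per GPU arch,
# then ptxas, then fatbinary, then the host cc1. Nothing is executed.
clang++ -### -c axpy.cu --cuda-gpu-arch=sm_60
```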

So my question is: how do we replicate that process in memory, to generate
modules that we can add to our JIT?

I am no CUDA expert, and not much of a clang expert either, so if anyone
out there can point me in the right direction, I would be grateful.

Geoff