[llvm-dev] [RFC] Upstreaming PACXX (Programming Accelerators with C++)

Haidl, Michael via llvm-dev llvm-dev at lists.llvm.org
Mon Feb 5 22:27:24 PST 2018

> Interesting.
> I do something similar for D targeting CUDA (via NVPTX) and OpenCL (via my
> forward-ported fork of Khronos’ SPIRV-LLVM)[1], except all the code
> generation is done at compile time. The runtime is aided by compile time
> reflection so that calling kernels is done by symbol.
> What kind of performance difference do you see running code that was not
> developed with GPU in mind (e.g. range-v3) vs code that was?
[Haidl, Michael] 
We extended range-v3 with a few GPU-enabled algorithms, in particular to exploit range-v3's views for execution on GPUs. While the kernels are clearly designed for GPUs, mixing them with code like range-v3's views showed no negative performance impact. We evaluated against Thrust in the linked paper and were able to get on par with it. The views of range-v3 really do come as zero-cost abstractions. 
> What restrictions do you apply? I assume virtual functions, recursion. What
> else?
[Haidl, Michael] 
Virtual functions are still a problem. Recursion works up to a point (the stack frame size on the GPU is the limitation here). Since PACXX builds on CUDA and HIP, we can assume that recursion is possible (with minor intervention by the developer to set the stack size correctly). 
Exception handling in kernels is currently not possible in PACXX. 
> How does pacxx's SPMD model differ from what one can do in LLVM at the
> moment?
[Haidl, Michael] 
There is not much difference. I have a small experimental branch that accepts CUDA code as input and compiles it with PACXX. The only problem is device-specific constructs, such as the NVPTX intrinsics Clang generates for CUDA, which currently make portable execution impossible.  
> Nic
> [1]: http://github.com/libmir/dcompute/
> > On 5 Feb 2018, at 7:11 am, Haidl, Michael via llvm-dev <llvm-
> dev at lists.llvm.org> wrote:
> >
> > Hi LLVM community,
> >
> > after 3 years of development and various talks at LLVM-HPC, EuroLLVM, and
> other scientific conferences, I want to present my PhD research topic to the
> list.
> >
> > The main goal of my research was to develop a single-source programming
> model comparable to CUDA or SYCL for accelerators supported by LLVM (e.g., Nvidia
> GPUs). PACXX uses Clang as front-end for code generation and comes with a
> runtime library (PACXX-RT) to execute kernels on the available hardware.
> Currently, PACXX supports Nvidia GPUs through the NVPTX Target and CUDA,
> CPUs through MCJIT (including whole function vectorization thanks to RV [1])
> and has an experimental back-end for AMD GPUs using the AMDGPU Target
> and ROCm.
> >
> > The main idea behind PACXX is to use LLVM IR as the kernel code
> representation, which is embedded into the executable together with the
> PACXX-RT. At program runtime, the PACXX-RT compiles the IR down to the
> final MC level and hands it over to the device. Since PACXX currently does
> not enforce any major restrictions on the C++ code, we managed to run (almost)
> arbitrary C++ code on GPUs, including range-v3 [2, 3].
> >
> > A short vector addition example using PACXX:
> >
> > ...
> >
> > Recently, I open-sourced PACXX on GitHub [4] under the same license LLVM
> is currently using.
> > Since my PhD is now in its final stage, I wanted to ask whether there is
> interest in having such an SPMD programming model upstreamed.
> > PACXX is currently on par with release_60 and requires only minor
> modifications to Clang, e.g., a command-line switch, C++ attributes, some
> diagnostics, and metadata generation during code gen.
> > The PACXX-RT can be integrated into the LLVM build system or remain
> a standalone project. (BTW, may I ask to add PACXX to the LLVM projects?)
> >
> > Looking forward to your feedback.
> >
> > Cheers,
> > Michael Haidl
> >
> > [1] https://github.com/cdl-saarland/rv
> > [2] https://github.com/ericniebler/range-v3
> > [3] https://dl.acm.org/authorize?N20051
> > [4] https://github.com/pacxx/pacxx-llvm
> > _______________________________________________
> > LLVM Developers mailing list
> > llvm-dev at lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
