[llvm-dev] [RFC] Upstreaming PACXX (Programming Accelerators with C++)

Renato Golin via llvm-dev llvm-dev at lists.llvm.org
Mon Feb 5 06:51:41 PST 2018


I was going to say, this reminds me of Kai's presentation at FOSDEM yesterday.

https://fosdem.org/2018/schedule/event/heterogenousd/

It's always good to see the cross-architecture power of LLVM being
used in creative ways! :)

cheers,
--renato

On 5 February 2018 at 13:35, Nicholas Wilson via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> Interesting.
>
> I do something similar for D, targeting CUDA (via NVPTX) and OpenCL (via my forward-ported fork of Khronos’ SPIRV-LLVM) [1], except all the code generation is done at compile time. The runtime is aided by compile-time reflection, so kernels are called by symbol.
>
> What kind of performance difference do you see running code that was not developed with GPU in mind (e.g. range-v3) vs code that was?
> What restrictions do you apply? I assume virtual functions and recursion are disallowed. What else?
>
> How does PACXX's SPMD model differ from what one can do in LLVM at the moment?
>
> Nic
>
> [1]: http://github.com/libmir/dcompute/
>
>> On 5 Feb 2018, at 7:11 am, Haidl, Michael via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hi LLVM community,
>>
>> After three years of development and various talks at LLVM-HPC, EuroLLVM, and other scientific conferences, I want to present my PhD research topic to the list.
>>
>> The main goal of my research was to develop a single-source programming model, comparable to CUDA or SYCL, for accelerators supported by LLVM (e.g., Nvidia GPUs). PACXX uses Clang as its front end for code generation and comes with a runtime library (PACXX-RT) to execute kernels on the available hardware. Currently, PACXX supports Nvidia GPUs through the NVPTX target and CUDA, CPUs through MCJIT (including whole-function vectorization thanks to RV [1]), and has an experimental back end for AMD GPUs using the AMDGPU target and ROCm.
>>
>> The main idea behind PACXX is the use of LLVM IR as the kernel code representation, which is integrated into the executable together with the PACXX-RT. At program runtime, the PACXX-RT compiles the IR down to the final MC level and hands it over to the device. Since PACXX currently does not enforce any major restrictions on the C++ code, we managed to run (almost) arbitrary C++ code on GPUs, including range-v3 [2, 3].
>>
>> A short vector addition example using PACXX:
>>
>> ...
>>
>> Recently, I open-sourced PACXX on GitHub [4] under the same license LLVM is currently using.
>> Since my PhD is now in its final stage, I wanted to ask whether there is interest in having such an SPMD programming model upstreamed.
>> PACXX is currently on par with release_60 and requires only minor modifications to Clang, e.g., a command-line switch, C++ attributes, some diagnostics, and metadata generation during code generation.
>> The PACXX-RT can be integrated into the LLVM build system or remain a standalone project. (By the way, may I ask to have PACXX added to the LLVM projects?)
>>
>> Looking forward to your feedback.
>>
>> Cheers,
>> Michael Haidl
>>
>> [1] https://github.com/cdl-saarland/rv
>> [2] https://github.com/ericniebler/range-v3
>> [3] https://dl.acm.org/authorize?N20051
>> [4] https://github.com/pacxx/pacxx-llvm
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>




