[cfe-dev] [RFC] Upstreaming PACXX (Programming Accelerators with C++)

Haidl, Michael via cfe-dev cfe-dev at lists.llvm.org
Mon Feb 5 22:41:56 PST 2018


Hi Jeff,

thanks for your answers. Comments are inlined 😉

From: Jeff Hammond [mailto:jeff.science at gmail.com]
Sent: Tuesday, February 6, 2018 07:06
To: Haidl, Michael <michael.haidl at uni-muenster.de>
Cc: cfe-dev at lists.llvm.org
Subject: Re: [cfe-dev] [RFC] Upstreaming PACXX (Programming Accelerators with C++)

This is cool.  I'm very glad to see your PhD research was done in a production environment and that you are open-sourcing it.

PACXX looks a lot like SYCL.  Have you considered whether it can be evolved into a high-quality implementation of the SYCL standard API?  There's a lot of value in implementing a standardized API with multiple implementations.  I've used ComputeCpp, triSYCL, and sycl-gtx recently, and each has limitations that could be overcome by having SYCL support in Clang/LLVM.
[Haidl, Michael]
SYCL and PACXX share the same goal. When I started my work, SYCL was just a preliminary draft and came a little bit too late for me. I think building SYCL on top of PACXX should be possible, and an open-source implementation of SYCL in Clang/LLVM would really be a good thing.

Your contribution of a good SYCL implementation would be really valuable, since there appears to be no implementation that both performs well and is open-source.  I completely understand if you have no time to turn PACXX into SYCL, but if you have technical arguments against doing so even if time permitted, I think they'd be useful to share, although perhaps not in this forum.
[Haidl, Michael]
I don't see any showstoppers for building the SYCL interface on top of the PACXX runtime. I know SYCL a little bit, and the standard seems a lot more restrictive than what PACXX can do, e.g., SYCL requires standard-layout types for types used in a kernel, which is not the case for PACXX: since the host and device compilers are the same, the host and device layouts of a type do not differ.

On a practical level, PACXX seems to require some software hardening.  I tried to follow the docs and build it locally, but had some issues (GitHub issues were created, so I'll omit details here) and ultimately failed to compile your example program.  I'd love to be able to try it out, since I've recently evaluated SYCL and Boost.Compute using https://github.com/ParRes/Kernels, and it seems like PACXX is a peer of these.
[Haidl, Michael] I noticed your issues and I will look into it.

Best,

Jeff


On Sun, Feb 4, 2018 at 11:50 PM, Haidl, Michael via cfe-dev <cfe-dev at lists.llvm.org> wrote:
Hi LLVM community,

after three years of development and various talks at LLVM-HPC, EuroLLVM, and other scientific conferences, I want to present my PhD research topic to the list.

The main goal of my research was to develop a single-source programming model, comparable to CUDA or SYCL, for accelerators supported by LLVM (e.g., Nvidia GPUs). PACXX uses Clang as its front-end for code generation and comes with a runtime library (PACXX-RT) to execute kernels on the available hardware. Currently, PACXX supports Nvidia GPUs through the NVPTX target and CUDA, CPUs through MCJIT (including whole-function vectorization thanks to RV [1]), and has an experimental back-end for AMD GPUs using the AMDGPU target and ROCm.

The main idea behind PACXX is to use LLVM IR as the kernel code representation, which is integrated into the executable together with the PACXX-RT. At program runtime, the PACXX-RT compiles the IR down to the final MC level and hands it over to the device. Since PACXX currently does not enforce any major restrictions on the C++ code, we managed to run (almost) arbitrary C++ code on GPUs, including range-v3 [2, 3].

A short vector addition example using PACXX:

using namespace pacxx::v2;

int main(int argc, char *argv[]) {
  // get the default executor
  auto &exec = Executor::get();

  size_t size = 128;
  std::vector<int> a(size, 1);
  std::vector<int> b(size, 2);
  std::vector<int> c(size, 0);

  // allocate device side memory
  auto &da = exec.allocate<int>(a.size());
  auto &db = exec.allocate<int>(b.size());
  auto &dc = exec.allocate<int>(c.size());

  // copy data to the accelerator
  da.upload(a);
  db.upload(b);
  dc.upload(c);

  // get the raw pointers
  auto pa = da.get();
  auto pb = db.get();
  auto pc = dc.get();

  // define the computation
  auto vadd = [=](auto &config) {
    auto i = config.get_global(0);
    if (i < size)
      pc[i] = pa[i] + pb[i];
  };

  // launch and synchronize
  std::promise<void> promise;
  auto future = exec.launch(vadd, {{1}, {128}}, promise);
  future.wait();

  // copy back the data
  dc.download(c);
}

Recently, I open-sourced PACXX on GitHub [4] under the same license LLVM is currently using.
Since my PhD is now in its final stage I wanted to ask if there is interest in having such an SPMD programming model upstreamed.
PACXX is currently on par with release_60 and only requires minor modifications to Clang, e.g., a command line switch, C++ attributes, some diagnostics, and metadata generation during code gen.
The PACXX-RT can be integrated into the LLVM build system but may also remain a standalone project. (BTW, may I ask to have PACXX added to the LLVM projects?)

Looking forward to your feedback.

Cheers,
Michael Haidl

[1] https://github.com/cdl-saarland/rv
[2] https://github.com/ericniebler/range-v3
[3] https://dl.acm.org/authorize?N20051
[4] https://github.com/pacxx/pacxx-llvm



--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/