[cfe-dev] [Openmp-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries

Jason Henline via cfe-dev cfe-dev at lists.llvm.org
Mon Mar 14 17:26:09 PDT 2016


Sergos,

> This seems like a cool idea for providing a high-level interface for
> CUDA-based development. As far as I understand, StreamExecutor is designed
> as an abstract runtime layer over the target architecture for offloading,
> while still requiring target-specific code from the user. In my
> understanding this contradicts the PPM: the abstraction of the programming
> itself. To draw a parallel: you provide inline assembly to give the user
> the best performance and flexibility, while OpenMP is more like a Fortran
> compiler that abstracts away the machine entirely. This is what I would
> call the biggest difference between the projects. Am I right?

Yes, it sounds to me that you have the right idea here.

> The document you refer to was published a year ago, and there has been
> significant progress since then, with a prototype compiler implemented at
> https://github.com/clang-omp. The implementation supports NVIDIA and x86
> targets, providing an abstraction of the target platform. Other parties
> contributed to the design as well, and we expect to see open-source
> contributions from them. In any case, I can say that the design was
> thoroughly reviewed and approved by the full list of authors, so their
> targets are covered as well. As described in the design document, the
> abstraction layer is the libomptarget library, which dispatches the
> different target binaries at runtime. Along with dispatching, the library
> keeps track of all data mapping between the host and all target devices,
> hiding this from the user and removing the burden of bookkeeping. This
> could be a good integration point for StreamExecutor. Still, we need to
> review both sides: the libomptarget interface provided to the compiler and
> StreamExecutor's internal interfaces.

Thanks for letting me know about the implementation on GitHub. I will take
a look at the libomptarget code there to see how I think it could work with
StreamExecutor. In general terms, I really like the idea of StreamExecutor
calling into libomptarget rather than implementing the offloading itself,
so it would be great if we can work those details out.
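To make the data-mapping point concrete, here is a minimal sketch of my own
(not code from either project) of what the user writes in OpenMP 4.x. The
map clauses below are exactly the bookkeeping libomptarget hides at runtime,
and with a compiler that does not support offloading the pragmas are simply
ignored and the loop runs on the host:

```cpp
#include <cstddef>
#include <vector>

// Scale a vector, offloading to a device when the compiler supports it.
// The map clauses declare which buffers the runtime should copy to the
// device before the region runs and which to copy back when it finishes.
std::vector<float> scale(const std::vector<float>& in, float factor) {
  std::vector<float> out(in.size());
  const float* in_p = in.data();
  float* out_p = out.data();
  const int n = static_cast<int>(in.size());
#pragma omp target map(to : in_p[0:n]) map(from : out_p[0:n])
#pragma omp parallel for
  for (int i = 0; i < n; ++i)
    out_p[i] = in_p[i] * factor;
  return out;
}
```

The same source builds serially or with offloading; only the compiler
invocation changes.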

> As I understand it, the OpenMP standard allows calling functions inside
> target regions. Those functions can be generated for the target by the
> compiler, when the user annotates them with "#pragma omp declare target".
> But the standard also allows using functions added to the target binaries
> in any other way. For example, on the Xeon Phi platform one can use any
> shared library that has been put on the device beforehand. For GPUs and
> other targets, binaries produced by other compilers or build tools must be
> passed to the target image link step explicitly.

I was not aware that OpenMP had a mode for running code compiled by a
different compiler. That sounds very nice. I would like to learn more about
what the user interface looks like for this, specifically in the case of
CUDA.
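For reference, here is what that annotation looks like in a small example of
my own (the function names are made up for illustration). Functions between
the declare target markers get a device version emitted for each configured
offload target, and remain ordinary host functions for a compiler without
offload support:

```cpp
#pragma omp declare target
// A device copy of this function is emitted for each offload target,
// so it can be called from inside a target region.
inline float saxpy_element(float a, float x, float y) { return a * x + y; }
#pragma omp end declare target

float saxpy_sum(float a, const float* x, const float* y, int n) {
  float sum = 0.0f;
#pragma omp target map(to : x[0:n], y[0:n]) map(tofrom : sum)
#pragma omp parallel for reduction(+ : sum)
  for (int i = 0; i < n; ++i)
    sum += saxpy_element(a, x[i], y[i]);  // call into device code
  return sum;
}
```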

> The OpenMP programming model was designed so that a program is written
> once and can be compiled with many compilers for multiple targets; in
> addition, the same compiler can build it in serial mode. This fits well
> with complex build systems and provides support for a number of targets
> without additional changes to the build scripts. My understanding is that
> StreamExecutor requires the different targets to be built separately and
> then put together to allow offloading.

Yes, I think that is an accurate representation of StreamExecutor. Our hope
is to integrate StreamExecutor into clang itself so that clang can manage
the bundling of device code in object files and the launching of that code.
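As a concrete illustration of the write-once model (the compile lines are my
reading of the clang-omp prototype's flags and may not be exact), the same
translation unit can be built serially, host-parallel, or with offloading:

```cpp
// One source file, several build modes (flags illustrative):
//   clang++ dot.cpp                                     # serial; pragmas ignored
//   clang++ -fopenmp dot.cpp                            # parallel on the host
//   clang++ -fopenmp -fopenmp-targets=<triple> dot.cpp  # host + offload

double dot(const double* a, const double* b, int n) {
  double result = 0.0;
#pragma omp target teams distribute parallel for \
    map(to : a[0:n], b[0:n]) reduction(+ : result)
  for (int i = 0; i < n; ++i)
    result += a[i] * b[i];
  return result;
}
```

The build scripts never change; only the flags select which targets are
embedded in the binary.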

> I see some other differences in the programming model as well; for
> example, the executor project you referred to supports C++ only. Are there
> any plans to support plain C and Fortran? I would suggest we do more
> analysis of how to extend the existing library to support StreamExecutor.

We are only interested in supporting C++. One of the main goals of
StreamExecutor is to create a nice interface specifically for C++.
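To show what I mean by a C++-centric interface, here is a rough paraphrase
of the usage style from the StreamExecutor proposal. Treat this as
pseudocode from memory, not the real API: the class and method names
(MultiPlatformManager, ThenMemcpyH2D, ThenLaunch, and so on) are my
recollection of the RFC and may not match the actual headers:

```
// Pseudocode paraphrasing the StreamExecutor usage style.
se::Platform* platform = se::MultiPlatformManager::PlatformWithName("CUDA");
se::StreamExecutor* executor = platform->ExecutorForDevice(/*ordinal=*/0);

se::Stream stream(executor);
stream.Init();
auto device_buf = executor->AllocateArray<float>(count);
stream.ThenMemcpyH2D(host_input, &device_buf)
      .ThenLaunch(se::ThreadDim(threads), se::BlockDim(blocks),
                  my_kernel, device_buf)
      .ThenMemcpyD2H(device_buf, &host_output)
      .BlockHostUntilDone();
```

Chaining, type safety, and scoped resource handling like this are natural
in C++ but hard to surface through a plain C interface.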

> Collaboration is always a plus, and having more interested parties
> benefits both sides. I would like to start with a description and review
> of the interfaces. The interfaces of libomptarget (both the compiler side
> and the target RTL) are in the document. Could you prepare a small
> overview of the (perhaps proposed) StreamExecutor internal interfaces to
> CUDA and OpenCL, so that we can derive the key common points of
> abstraction and try to map them to the libomptarget interface?

Yes, I will prepare a short overview of StreamExecutor's internal interface
to CUDA and OpenCL. That interface is already well defined, so I will just
need to copy and paste some things into a document. I plan to have it done
some time tomorrow so you can see how StreamExecutor would like to interact
with libomptarget.

Thanks very much for your input on this,
-Jason

On Mon, Mar 14, 2016 at 12:48 PM Sergey Ostanevich <sergos.gnu at gmail.com>
wrote:

> Jason,
>
> It looks a cool thought on providing high-level interface for CUDA-based
> development.  By far, my understanding is that the StreamExecutor is
> designed as an abstract runtime layer of target architecture for
> offloading, still requiring the target specific code from the user. This is
> something that contradicts the PPM in my understanding: the abstraction of
> the programming itself. To make the parallel: you provide an inline
> assembly to enable a user with best performance/flexibility while OpenMP is
> rather a Fortran compiler that gives abstraction of machine at all. This I
> would name the biggest difference between the projects. Am I right?
>
> The document you refers to is published a year ago and there was a
> significant progress since then, with prototype compiler implemented at
> https://github.com/clang-omp. The implementation supports NVIDIA and x86
> targets, providing an abstraction of the target platform. There are other
> parties contributed to the design, yet we will see open source
> contributions from them. Nevertheless, I can say that the design was
> thoroughly reviewed by them and it is approved by the full list of authors
> - hence, their targets are satisfied also. As per the design document, the
> abstraction layer is the libomptarget library supposed to dispatch
> different target binaries at runtime. Along with dispatching the library
> keeps track of all data mapping between the host and all target devices,
> hiding this from the user and thus removes the burden of bookkeeping. This
> can be a good point for the StreamExecutor to integrate its interface.
> Still we need to review both sides: the libomptarget interface provided to
> the compiler and the StreamExecutor internal interfaces.
>
> > While the OpenMP model provides the convenience of allowing the author
> to write their kernel code in standard C/C++, the StreamExecutor model
> allows for the use of any kernel language (e.g. CUDA C++ or OpenCL C). This
> lets authors use  platform-specific features that are only present in
> platform-specific kernel definition languages.
>
>
>
> Per my understanding, the OpenMP standard allows calling of functions in
> target regions. Those functions can be generated for target either by the
> compiler, where user annotate appropriate functions with "#pragma omp
> declare target". But it also allows using of functions added to the target
> binaries in any other way. For example, on Xeon PHI platform one can use
> any shared library that is put on the device beforehand. As for the GPU and
> other targets – binaries obtained from other compilers/build tools should
> be passed to the target image link explicitly.
>
>
> The OpenMP programming model was aimed the program is written once, and
>  can be compiled with many compilers for multi-targets, in addition, with
> the same compiler it can be compiled in serial mode. This fits well for
> complex build systems and provides support to number of targets with no
> additional interference with build scripts. My understanding is that  the
> StreamExecutor requires different targets to be built separately and
> thereafter put together to allow offloading.
>
>
>
> I see some other differences in programming model as well, for example,
> the executor project you referred to supports C++ only. Is there any plans
> to support plain C and Fortran? I would suggest we do a more analysis to
> extend the existing library for support StreamExecutor.
>
>
> To collaborate is always a plus and having more interested parties is
> beneficial both ways. I would like to start from the interfaces description
> and review. The interface of the libomptarget (both compiler side and the
> target RTL) are in the document. Could you prepare a small overview of the
> (perhaps, supposed) StreamExecutor internal interfaces with CUDA and OpenCL
> so that we can derive key common points of abstraction and try to map them
> to libomptarget interface?
>
>
> Regards,
>
> Sergos
>
> Intel Compiler Team
>
>
>
> On Mon, Mar 14, 2016 at 8:50 PM, Jason Henline via cfe-dev <
> cfe-dev at lists.llvm.org> wrote:
>
>> I think it would be great if StreamExecutor could use liboffload to
>> perform its offloading under the hood. Right now offloading is handled in
>> StreamExecutor using platform plugins, so I think it could be very natural
>> for us to write a plugin which basically forwards to liboffload. If that
>> worked out, we could delete our current plugins and depend only on those
>> based on liboffload, and then all the offloading code would be unified.
>> Then, just as James said, StreamExecutor would provide a nice C++ interface
>> on top of liboffload, and liboffload could continue to support OpenMP
>> directly.
>>
>> In this plan, I think it would make sense to move liboffload to the new
>> project being proposed by this RFC, and hopefully that would also make
>> liboffload more usable as a stand-alone project. Before moving forward with
>> any of these plans, I think it is right to wait to hear what IBM thinks.
>>
>> On Mon, Mar 14, 2016 at 10:14 AM C Bergström <openmp-dev at lists.llvm.org>
>> wrote:
>>
>>> /* ignorable rant */
>>> I've publicly advocated it shouldn't have been there in the 1st place.
>>> I have been quite vocal the work wasn't for everyone else to pay, but
>>> should have been part of the initial design. (Basically getting it
>>> right the 1st time - instead of forcing someone else to wade through a
>>> bunch of cmake)
>>>
>>> On Tue, Mar 15, 2016 at 1:10 AM, Cownie, James H
>>> <james.h.cownie at intel.com> wrote:
>>> >> I'd support some of Jame's comments if liboffload wasn't glued to OMP
>>> as it is now.
>>> >
>>> > I certainly have no objection to moving liboffload elsewhere if that
>>> makes it more useful to people.
>>> > There is no real "glue" holding it there; it simply ended up in the
>>> OpenMP directory structure because that
>>> > was an easy place to put it, not because that's the optimal place for
>>> it.
>>> >
>>> > To some extent it has stayed there because no-one has put in any
>>> effort to do the work to move it.
>>> >
>>> > -- Jim
>>> >
>>> > James Cownie <james.h.cownie at intel.com>
>>> > SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
>>> > Tel: +44 117 9071438
>>> >
>>> > -----Original Message-----
>>> > From: C Bergström [mailto:cbergstrom at pathscale.com]
>>> > Sent: Monday, March 14, 2016 5:01 PM
>>> > To: Cownie, James H <james.h.cownie at intel.com>
>>> > Cc: llvm-dev <llvm-dev at lists.llvm.org>; cfe-dev <
>>> cfe-dev at lists.llvm.org>; openmp-dev at lists.llvm.org
>>> > Subject: Re: [cfe-dev] RFC: Proposing an LLVM subproject for
>>> parallelism runtime and support libraries
>>> >
>>> > I'd support some of Jame's comments if liboffload wasn't glued to OMP
>>> > as it is now. My attempts to decouple it into something with better
>>> > design layering and outside of OMP source repo, have failed. For it to
>>> > be advocated as "the" offload lib - it needs a home (imnsho) outside
>>> > of OMP. Somewhere that others can easily play with it and not pay the
>>> > OMP tax. It may tick some of the boxes which have been mentioned, but
>>> > I'm curious how well it does when put under real workloads.
>>> >
>>> > On Tue, Mar 15, 2016 at 12:53 AM, Cownie, James H via cfe-dev
>>> > <cfe-dev at lists.llvm.org> wrote:
>>> >> Jason,
>>> >>
>>> >>
>>> >>
>>> >> It’s great that Google are interested in contributing to the
>>> development of
>>> >> LLVM in this area, and that you have code to support offload.
>>> >>
>>> >> However, I’m not sure that all of it is needed, since LLVM already
>>> has the
>>> >> offload library which has been being developed in the context of
>>> OpenMP, but
>>> >> actually provides a general facility. It has been a part of LLVM
>>> since April
>>> >> 2014, and is already being used to offload to both Intel Xeon Phi and
>>> (at
>>> >> least NVidia) GPUs. (The IBM folks can tell you more about that!)
>>> >>
>>> >>
>>> >>
>>> >> The main difference I see (at a very first glance!) is that your
>>> >> StreamExecutor interfaces seem to be aimed more at end user code,
>>> whereas
>>> >> the interface to the existing offload library has not been designed
>>> for the
>>> >> user, but to be an interface from the compiler. That has advantages
>>> and
>>> >> disadvantages
>>> >>
>>> >> Advantages:
>>> >>
>>> >> ·         It is a C level interface, so is callable from C,C++ and
>>> Fortran
>>> >>
>>> >> Disadvantages:
>>> >>
>>> >> ·         Using it directly from C++ user code may be harder than
>>> using
>>> >> StreamExecutor.
>>> >>
>>> >>
>>> >>
>>> >> However, there is nothing in the interface that prevents it from
>>> being used
>>> >> with CUDA or OpenCL, and it already seems to support the low level
>>> features
>>> >> you cited as StreamExecutor’s advantages, though not the “looks just
>>> like
>>> >> CUDA” aspects, since it’s explicitly vendor neutral.
>>> >>
>>> >>
>>> >>
>>> >>> StreamExecutor:
>>> >>
>>> >>>
>>> >>
>>> >>> * abstracts the underlying accelerator platform (avoids locking you
>>> into a
>>> >>
>>> >>> single vendor, and lets you write code without thinking about which
>>> >>
>>> >>> platform you'll be running on).
>>> >>
>>> >> Liboffload does this (and has a specific design for how to abstract
>>> new
>>> >> devices and support them using device specific libraries).
>>> >>
>>> >>> * provides an open-source alternative to the CUDA runtime library.
>>> >>
>>> >> I am not a CUDA expert, so I can’t comment on this! As before, IBM
>>> should
>>> >> comment.
>>> >>
>>> >>> * gives users a stream management model whose terminology matches
>>> that of
>>> >>> the CUDA programming model.
>>> >>
>>> >> This is not abstract, but seems CUDA target specific, which is, if
>>> anything,
>>> >> worrying for a supposedly vendor-neutral interface!
>>> >>
>>> >>> * makes use of modern C++ to create a safe, efficient, easy-to-use
>>> >>> programming interface.
>>> >>
>>> >> No, because liboffload is an implementation layer, not intended to be
>>> >> user-visible.
>>> >>
>>> >>
>>> >>
>>> >>> StreamExecutor makes it easy to:
>>> >>
>>> >>>
>>> >>
>>> >>> * move data between host and accelerator (and also between peer
>>> >>> accelerators).
>>> >>
>>> >> Liboffload supports this.
>>> >>
>>> >>> * execute data-parallel kernels written in the OpenCL or CUDA kernel
>>> >>> languages.
>>> >>
>>> >> I believe this should be easy; IBM can comment better, since they
>>> have been
>>> >> working on GPU support.
>>> >>
>>> >>> * inspect the capabilities of a GPU-like device at runtime.
>>> >>
>>> >>> * manage multiple devices.
>>> >>
>>> >> Liboffload supports this.
>>> >>
>>> >>
>>> >>
>>> >> We’d therefore be very interested in seeing an approach that
>>> implemented a
>>> >> C++ specific user-friendly interface on top of the existing liboffload
>>> >> functionality, but we don’t see a reason to rework the OpenMP
>>> implementation
>>> >> to use StreamExecutor (since what LLVM already has is working fine,
>>> and
>>> >> supporting offload to both GPUs and Xeon Phi).
>>> >>
>>> >>
>>> >>
>>> >> -- Jim
>>> >>
>>> >> James Cownie <james.h.cownie at intel.com>
>>> >> SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
>>> >>
>>> >> Tel: +44 117 9071438
>>> >>
>>> >>
>>> >>
>>> >> ---------------------------------------------------------------------
>>> >> Intel Corporation (UK) Limited
>>> >> Registered No. 1134945 (England)
>>> >> Registered Office: Pipers Way, Swindon SN3 1RJ
>>> >> VAT No: 860 2173 47
>>> >>
>>> >> This e-mail and any attachments may contain confidential material for
>>> >> the sole use of the intended recipient(s). Any review or distribution
>>> >> by others is strictly prohibited. If you are not the intended
>>> >> recipient, please contact the sender and delete all copies.
>>> >>
>>> >>
>>> >> _______________________________________________
>>> >> cfe-dev mailing list
>>> >> cfe-dev at lists.llvm.org
>>> >> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>> >>
>>> _______________________________________________
>>> Openmp-dev mailing list
>>> Openmp-dev at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/openmp-dev
>>>
>>
>

