[cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries
Cownie, James H via cfe-dev
cfe-dev at lists.llvm.org
Mon Mar 14 10:10:30 PDT 2016
> I'd support some of James's comments if liboffload weren't glued to OMP as it is now.
I certainly have no objection to moving liboffload elsewhere if that makes it more useful to people.
There is no real "glue" holding it there; it simply ended up in the OpenMP directory structure because that
was an easy place to put it, not because it is the optimal place for it.
To some extent it has stayed there because no one has put in the effort to move it.
-- Jim
James Cownie <james.h.cownie at intel.com>
SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
Tel: +44 117 9071438
-----Original Message-----
From: C Bergström [mailto:cbergstrom at pathscale.com]
Sent: Monday, March 14, 2016 5:01 PM
To: Cownie, James H <james.h.cownie at intel.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>; cfe-dev <cfe-dev at lists.llvm.org>; openmp-dev at lists.llvm.org
Subject: Re: [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries
I'd support some of James's comments if liboffload weren't glued to OMP
as it is now. My attempts to decouple it into something with better
design layering, outside the OMP source repo, have failed. For it to
be advocated as "the" offload lib, it needs a home (imnsho) outside
of OMP -- somewhere others can easily play with it without paying the
OMP tax. It may tick some of the boxes that have been mentioned, but
I'm curious how well it holds up under real workloads.
On Tue, Mar 15, 2016 at 12:53 AM, Cownie, James H via cfe-dev
<cfe-dev at lists.llvm.org> wrote:
> Jason,
>
>
>
> It’s great that Google are interested in contributing to the development of
> LLVM in this area, and that you have code to support offload.
>
> However, I'm not sure that all of it is needed, since LLVM already has the
> offload library, which has been developed in the context of OpenMP but
> actually provides a general facility. It has been part of LLVM since April
> 2014 and is already used to offload to both Intel Xeon Phi and (at
> least NVIDIA) GPUs. (The IBM folks can tell you more about that!)
>
>
>
> The main difference I see (at a very first glance!) is that your
> StreamExecutor interfaces seem to be aimed more at end-user code, whereas
> the interface to the existing offload library has been designed not for the
> user but as an interface for the compiler to target. That has advantages and
> disadvantages; a sketch follows the list below.
>
> Advantages:
>
> * It is a C-level interface, so it is callable from C, C++, and Fortran.
>
> Disadvantages:
>
> * Using it directly from C++ user code may be harder than using
> StreamExecutor.
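>
> To make that trade-off concrete, here is a minimal sketch of the two
> styles. The entry points below are invented for illustration (they are
> not liboffload's real names) and stubbed so the example compiles:
>
>     #include <cstddef>
>     #include <cstdio>
>     #include <cstring>
>
>     // Hypothetical C-level offload API in the spirit of an
>     // implementation-layer library: plain C linkage means C, C++,
>     // and Fortran (via ISO_C_BINDING) can all call it directly.
>     extern "C" {
>       struct offload_device { int id; };
>       static offload_device g_dev = {0};
>       offload_device *offload_get_device(int index) {
>         (void)index; return &g_dev;  // stub: pretend there is one device
>       }
>       int offload_copy_to_device(offload_device *, void *dst,
>                                  const void *src, size_t bytes) {
>         std::memcpy(dst, src, bytes);  // stub: no real device memory
>         return 0;
>       }
>     }
>
>     int main() {
>       float host[4] = {1, 2, 3, 4}, dev_buf[4];
>       offload_device *dev = offload_get_device(0);
>       // The caller juggles raw pointers and byte counts by hand:
>       // fine as a compiler target, clunky as a C++ user-level API.
>       offload_copy_to_device(dev, dev_buf, host, sizeof host);
>       std::printf("copied %zu bytes to device %d\n", sizeof host, dev->id);
>     }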
>
>
>
> However, there is nothing in the interface that prevents it from being used
> with CUDA or OpenCL, and it already seems to support the low-level features
> you cited as StreamExecutor's advantages, though not the "looks just like
> CUDA" aspects, since it's explicitly vendor-neutral.
>
>
>
>> StreamExecutor:
>>
>> * abstracts the underlying accelerator platform (avoids locking you into a
>> single vendor, and lets you write code without thinking about which
>> platform you'll be running on).
> Liboffload does this (and has a specific design for how to abstract new
> devices and support them using device-specific libraries).
>
>> * provides an open-source alternative to the CUDA runtime library.
>
> I am not a CUDA expert, so I can’t comment on this! As before, IBM should
> comment.
>
>> * gives users a stream management model whose terminology matches that of
>> the CUDA programming model.
>
> This is not abstract, but seems CUDA-specific, which is, if anything,
> worrying for a supposedly vendor-neutral interface!
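>
> For anyone who has not written CUDA: the model being borrowed there is
> the stream, an in-order queue of device operations that the host
> eventually waits on. A toy mock of that fluent style (all names invented
> here, not StreamExecutor's actual API):
>
>     #include <cstdio>
>
>     // Toy stand-in for a CUDA-style stream: operations queue in
>     // order, and the host blocks at the end.
>     struct Stream {
>       Stream &then_memcpy_h2d() { std::puts("H2D copy queued");      return *this; }
>       Stream &then_launch()     { std::puts("kernel launch queued"); return *this; }
>       Stream &then_memcpy_d2h() { std::puts("D2H copy queued");      return *this; }
>       void block_host_until_done() { std::puts("host waits for stream"); }
>     };
>
>     int main() {
>       Stream s;
>       // The vocabulary (stream, H2D/D2H, launch) comes straight from
>       // CUDA, which is exactly the vendor-neutrality concern above.
>       s.then_memcpy_h2d().then_launch().then_memcpy_d2h();
>       s.block_host_until_done();
>     }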
>
>> * makes use of modern C++ to create a safe, efficient, easy-to-use
>> programming interface.
>
> No, because liboffload is an implementation layer, not intended to be
> user-visible.
>
>
>
>> StreamExecutor makes it easy to:
>>
>> * move data between host and accelerator (and also between peer
>> accelerators).
>
> Liboffload supports this.
>
>> * execute data-parallel kernels written in the OpenCL or CUDA kernel
>> languages.
>
> I believe this should be easy; IBM can comment better, since they have been
> working on GPU support.
>
>> * inspect the capabilities of a GPU-like device at runtime.
>> * manage multiple devices.
>
> Liboffload supports this.
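>
> To spell out what "inspect capabilities" and "manage multiple devices"
> amount to at the implementation-layer level, here is a sketch with
> invented, stubbed entry points (not liboffload's real interface):
>
>     #include <cstdio>
>
>     // Hypothetical C-level queries, stubbed so the sketch runs.
>     extern "C" {
>       int offload_num_devices(void) { return 2; }
>       int offload_device_mem_mb(int dev) { return dev == 0 ? 16384 : 12288; }
>     }
>
>     int main() {
>       // Multi-device management reduces to iterating device indices
>       // and querying per-device properties at runtime.
>       const int n = offload_num_devices();
>       for (int d = 0; d < n; ++d)
>         std::printf("device %d: %d MB\n", d, offload_device_mem_mb(d));
>     }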
>
>
>
> We'd therefore be very interested in seeing an approach that implemented a
> C++-specific, user-friendly interface on top of the existing liboffload
> functionality, but we don't see a reason to rework the OpenMP implementation
> to use StreamExecutor (since what LLVM already has is working fine and
> supports offload to both GPUs and Xeon Phi).
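>
> As a hint of what that layering could look like, here is a minimal
> sketch of a C++ RAII wrapper over a C-level allocation/copy API; every
> name below is invented for the example, with stubs standing in for the
> real library:
>
>     #include <cstddef>
>     #include <cstring>
>     #include <vector>
>
>     // Hypothetical C-level entry points, stubbed with host memory;
>     // a real wrapper would forward to the offload library instead.
>     extern "C" {
>       void *offload_alloc(int dev, size_t bytes) {
>         (void)dev; return ::operator new(bytes);
>       }
>       void offload_free(int dev, void *p) { (void)dev; ::operator delete(p); }
>       void offload_copy_in(int dev, void *dst, const void *src, size_t n) {
>         (void)dev; std::memcpy(dst, src, n);
>       }
>     }
>
>     // The user-facing layer adds RAII ownership and typed sizes while
>     // the implementation layer underneath stays plain C.
>     template <typename T>
>     class DeviceBuffer {
>       int dev_; void *p_; size_t count_;
>     public:
>       DeviceBuffer(int dev, size_t count)
>           : dev_(dev), p_(offload_alloc(dev, count * sizeof(T))), count_(count) {}
>       ~DeviceBuffer() { offload_free(dev_, p_); }
>       DeviceBuffer(const DeviceBuffer &) = delete;
>       DeviceBuffer &operator=(const DeviceBuffer &) = delete;
>       void copy_from_host(const std::vector<T> &src) {
>         offload_copy_in(dev_, p_, src.data(), src.size() * sizeof(T));
>       }
>     };
>
>     int main() {
>       std::vector<float> host(1024, 1.0f);
>       DeviceBuffer<float> buf(/*dev=*/0, host.size());
>       buf.copy_from_host(host);  // no raw pointers or byte counts in user code
>     }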
>
>
>
> -- Jim
>
> James Cownie <james.h.cownie at intel.com>
> SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
>
> Tel: +44 117 9071438