[cfe-dev] [RFC][OpenMP][CUDA] Unified Offloading Support in Clang Driver

Justin Lebar via cfe-dev cfe-dev at lists.llvm.org
Thu Mar 3 14:09:54 PST 2016


Hi, I'm one of the people working on CUDA in clang.

In general I agree that the support for CUDA today is rather ad-hoc; it can
likely be improved.  However, there are many points in this proposal that I do
not understand.  Inasmuch as I think I understand it, I am concerned that it's
adding new abstractions instead of fixing the existing ones, and that this
will result in a lot of additional complexity.

> a) Create toolchains for host and offload devices before creating the actions.
>
> The driver has to detect the employed programming models through the provided
> options (e.g. -fcuda or -fopenmp) or file extensions. For each host and
> offloading device and programming model, it should create a toolchain.

Seems sane to me.

> b) Keep the generation of Actions independent of the program model.
>
> In my view, the Actions should only depend on the compile phases requested by
> the user and the file extensions of the input files. Only the way those
> actions are interpreted to create jobs should be dependent on the programming
> model.  This would avoid complicating the actions creation with dependencies
> that only make sense to some programming models, which would make the
> implementation hard to scale when new programming models are to be adopted.

I don't quite understand what you're proposing here, or what you're trying to
accomplish with this change.

Perhaps it would help if you could give a concrete example of how this would
change e.g. CUDA or Mac universal binary compilation?

For example, in CUDA compilation, we have an action which says "compile
everything below here as cuda arch sm_35".  sm_35 comes from a command-line
flag, so as I understand your proposal, this could not be in the action graph,
because it doesn't come from the filename or the compile phases requested by
the user.  So, how will we express this notion that some actions should be
compiled for a particular arch?
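
For reference, here is roughly how we express this today (a simplified sketch
with illustrative names, not the exact clang classes):

// Simplified sketch, illustrative names only; not the exact clang API.
// The device-side subgraph hangs off a node that carries the GPU arch,
// which comes from a command-line flag such as --cuda-gpu-arch=sm_35.
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Action {
  std::vector<std::unique_ptr<Action>> Inputs;
  virtual ~Action() = default;
};

struct CudaDeviceAction : Action {
  // Everything below this node in the graph is compiled for this arch.
  std::string GpuArch;
  explicit CudaDeviceAction(std::string Arch) : GpuArch(std::move(Arch)) {}
};

int main() {
  // "sm_35" is taken straight from the command line.
  CudaDeviceAction DeviceSide("sm_35");
  return DeviceSide.GpuArch == "sm_35" ? 0 : 1;
}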

> c) Use unbundling and bundling tools agnostic of the programming model.
>
> I propose a single change in the action creation, and that is the creation of
> an “unbundling” and “bundling” action whose goal is to prevent the user from
> having to deal with multiple files generated from multiple toolchains (host
> toolchain and offloading devices’ toolchains) when using separate compilation
> in their build system.

I'm not sure I understand what "separate compilation" is here.  Do you mean a
compilation strategy which outputs logically separate machine code for each
architecture, only to have this code combined at link time?  (In contrast to
how we currently compile CUDA, where the device code for a file is integrated
into the host code for that file at compile time?)

If that's right, then what I understand you're proposing here is that, instead
of outputting N different object files -- one for the host, and N-1 for all our
device architectures -- we'd just output one blob which clang would understand
how to handle.

For my part, I am highly wary of introducing a new file format into clang's
output.  Historically, clang (along with other compilers) has not output
proprietary blobs.  Instead, we output object files in well-understood,
interoperable formats, such as ELF.  This is beneficial because there are lots
of existing tools which can handle these files.  It also allows e.g. code
compiled with clang to be linked with g++.

Build tools are universally awful, and I sympathize with the urge not to change
them.  But I don't think this is a business we want the compiler to be in.
Instead, if a user wants this kind of "fat object file", they could obtain one
by using a simple wrapper around clang.  If this wrapper's output format became
widely-used, we could then consider supporting it directly within clang, but
that's a proposition for many years in the future.
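
To make that concrete, such a wrapper could be as small as the sketch below.
This is a hypothetical tool, not something that exists; it assumes the current
--cuda-host-only / --cuda-device-only and --cuda-gpu-arch flags, and the exact
device-side artifact may differ:

// Hypothetical "fat object" wrapper around clang, illustrative only.
// It compiles the host and device sides separately and then bundles the
// results with a stock archiver, so no new file format is needed.
#include <cstdlib>
#include <iostream>
#include <string>

int main(int argc, char **argv) {
  if (argc < 2) {
    std::cerr << "usage: fatwrap <file.cu>\n";
    return 1;
  }
  std::string in = argv[1];
  // Host-side object.
  std::system(("clang++ -c --cuda-host-only " + in + " -o host.o").c_str());
  // Device-side object for one arch (repeat per arch as needed).
  std::system(("clang++ -c --cuda-device-only --cuda-gpu-arch=sm_35 " + in +
               " -o device_sm_35.o").c_str());
  // Bundle both into a single archive the build system can track.
  return std::system("llvm-ar rcs fat_output.a host.o device_sm_35.o");
}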

> d) Allow the target toolchain to request the host toolchain to be used for a given action.

Seems sane to me.

> e)  Use a job results cache to enable sharing results between device and host toolchains.

I don't understand why we need a cache for job results.  Why can we not set up
the Action graph such that each node has the correct inputs?  (You've actually
sketched exactly what I think the Action graph should look like, for CUDA and
OpenMP compilations.)
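
To illustrate what I mean, with hypothetical structures rather than the
driver's actual classes: if each node lists its inputs, the results already
flow to the right consumers and there is nothing left to cache.

// Hypothetical action-graph shape, for illustration only.
// Each node lists its inputs directly, so every toolchain's result reaches
// its consumer through the graph rather than through a side cache.
#include <memory>
#include <string>
#include <vector>

struct Node {
  std::string Kind;                            // "compile", "link", ...
  std::string Triple;                          // toolchain that runs it
  std::vector<std::shared_ptr<Node>> Inputs;   // results it consumes
};

int main() {
  auto HostCompile = std::make_shared<Node>(
      Node{"compile", "x86_64-unknown-linux-gnu", {}});
  auto DeviceCompile = std::make_shared<Node>(
      Node{"compile", "nvptx64-nvidia-cuda", {}});
  // The host link consumes both results directly; no cache required.
  Node Link{"link", "x86_64-unknown-linux-gnu", {HostCompile, DeviceCompile}};
  return Link.Inputs.size() == 2 ? 0 : 1;
}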

> f) Intercept the jobs creation before the emission of the command.
>
> In my view this is the only change required in the driver (apart from the
> obvious toolchain changes) that would be dependent on the programming model.
> A job result post-processing function could check that there are offloading
> toolchains to be used and spawn the job creation for those toolchains, as
> well as append results from one toolchain to the results of another
> according to the programming model implementation's needs.

Again it's not clear to me why we cannot and should not represent this in the
Action graph.  It's that graph that's supposed to tell us what we're going to
do.

> g) Reflect the offloading programming model in the naming of the save-temps files.

We already do this somewhat; e.g. for CUDA with save-temps, we'll output foo.s
and foo-sm_35.s.  Extending this to be more robust (e.g. including the triple)
seems fine.

> h) Use special options -target-offload=<triple> to specify offloading targets and delimit options meant for a toolchain.

I think I agree that we should generalize the flags we're using.

I'm not sold on the name or structure (I'm not aware of any other flags that
affect *all* flags following them?), but we can bikeshed about that separately.

> i) Use the offload kinds in the toolchain to drive the commands generation by Tools.

I'm not sure exactly what this means, but it doesn't sound
particularly contentious.  :)

> 3. We are willing to help with implementation of CUDA-specific parts when
> they overlap with the common infrastructure, though we expect that effort to
> also be driven by other contributors specifically interested in CUDA support
> who have the necessary know-how (both on CUDA itself and how it is supported
> in Clang / LLVM).

Given that this is work that doesn't really help CUDA (the driver works fine
for us as-is), I am not sure we'll be able to devote significant resources to
this project.  Of course we'll be available to assist with relevant code
reviews and give advice.

I think that, like any other change to clang, the responsibility will rest on
the authors not to break existing functionality, at the very least insofar as
that is checked by existing unit tests.

Regards,
-Justin

On Thu, Mar 3, 2016 at 12:03 PM, Samuel F Antao via cfe-dev
<cfe-dev at lists.llvm.org> wrote:
> Hi Chris,
>
> I agree with Andrey when he says this should be a separate discussion.
>
> I think that aiming at a library that would support any possible programming
> model would take a long time, as it requires a lot of consensus, notably from
> those already maintaining programming models in clang (e.g. CUDA). We should
> try to do something incremental.
>
> I'm happy to discuss and know more about the design and code you would like
> to contribute to this, but I think you should post it in a different thread.
>
> Thanks,
> Samuel
>
> 2016-03-03 11:20 GMT-05:00 C Bergström <cfe-dev at lists.llvm.org>:
>>
>> On Thu, Mar 3, 2016 at 10:19 PM, Ronan Keryell <ronan at keryell.fr> wrote:
>> >>>>>> On Thu, 3 Mar 2016 18:19:43 +0700, C Bergström via cfe-dev
>> >>>>>> <cfe-dev at lists.llvm.org> said:
>> >
>> >     C> On Thu, Mar 3, 2016 at 5:50 PM, Ronan KERYELL via cfe-dev
>> >     C> <cfe-dev at lists.llvm.org> wrote:
>> >
>> >     >> Just to be sure I understand: you are thinking about being able
>> >     >> to outline several "languages" at once, such as CUDA *and*
>> >     >> OpenMP, right ?
>> >     >>
>> >     >> I think it is required for serious applications. For example, in
>> >     >> the HPC world, it is common to have hybrid multi-node
>> >     >> heterogeneous applications that use MPI+OpenMP+OpenCL for
>> >     >> example. Since MPI and OpenCL are just libraries, there is only
>> >     >> OpenMP to off-load here. But if we move to OpenCL SYCL instead,
>> >     >> with MPI+OpenMP+SYCL, then both OpenMP and SYCL have to be managed
>> >     >> by the Clang off-loading infrastructure at the same time, and we
>> >     >> must be sure they combine gracefully...
>> >     >>
>> >     >> I think your second proposal about (un)bundling can already
>> >     >> manage this.
>> >     >>
>> >     >> Otherwise, what about the code outlining itself used in the
>> >     >> off-loading process? The code generation itself requires
>> >     >> outlining the kernel code into external functions to be compiled
>> >     >> by the kernel compiler. Do you think it is up to the programmer
>> >     >> to re-use the recipes used by OpenMP and CUDA, for example, or
>> >     >> would it be interesting to have a third proposal that abstracts
>> >     >> the outliner further so it can be configured to handle OpenMP,
>> >     >> CUDA, SYCL... globally?
>> >
>> >     C> Some very good points above and back to my broken record..
>> >
>> >     C> If all offloading is done in a single unified library -
>> >     C> a. Lowering in LLVM is greatly simplified since there's ***1***
>> >     C> offload API to be supported. A region that's outlined for SYCL,
>> >     C> CUDA or something else is essentially the same thing. (I do
>> >     C> realize that some transformations may be highly target specific,
>> >     C> but to me that's more target-hw driven than programming-model
>> >     C> driven)
>> >
>> >     C> b. Mixing CUDA/OMP/ACC/Foo in theory may "just work" since the
>> >     C> same runtime will handle them all. (With the limitation that if
>> >     C> you want CUDA to *talk to* OMP or something else there needs to
>> >     C> be some glue.  I'm merely saying that 1 application can use
>> >     C> multiple models in a way that won't conflict)
>> >
>> >     C> c. The driver doesn't need to figure out whether to link against
>> >     C> one or a multitude of combining/conflicting libcuda, libomp,
>> >     C> libsomething - it's liboffload - done
>> >
>> > Yes, a unified target library would help.
>> >
>> >     C> The driver proposal and the liboffload proposal should imnsho be
>> >     C> tightly coupled and work together as *1*. The goals are
>> >     C> significantly overlapping and relevant. If you get the liboffload
>> >     C> OMP people to make that more agnostic - I think it simplifies the
>> >     C> driver work.
>> >
>> > So basically it is about introducing a fourth unification: liboffload.
>> >
>> > A great unification sounds great.
>> > My only concern is that if we tie everything together, it would increase
>> > the entry cost: all the different components would have to be ready in
>> > lock-step.
>> > If there is already a runtime available, it would be easier to start
>> > with and develop the other parts in the meantime.
>> > So from a pragmatic agile point-of-view, I would prefer not to impose a
>> > strong unification.
>>
>> I think I may not be explaining this clearly - let me elaborate by example
>> a bit below
>>
>> > In the proposal of Samuel, all the parts seem independent.
>> >
>> >     C>   ------ More specific to this proposal - device
>> >     C> linker vs host linker. What do you do for IPA/LTO or whole
>> >     C> program optimizations? (Outside the scope of this project.. ?)
>> >
>> > Ouch. I did not think about it. It sounds like science-fiction for
>> > now. :-) Probably outside the scope of this project..
>>
>> It should certainly not be science fiction or an after-thought. I
>> won't go into shameless self promotion, but there are certainly useful
>> things you can do when you have a "whole device kernel" perspective.
>>
>> To digress into the liboffload component of this (sorry): what we have
>> today is basically liboffload/src/ with all the source files mucked
>> together.
>>
>> What I'm proposing would look more like this
>>
>> liboffload/src/common_middle_layer_glue # to start this may be "best effort"
>> liboffload/src/omp # This code should exist today, but ideally should build on top of the middle layer
>> liboffload/src/ptx # this may exist today - not sure
>> liboffload/src/amd_gpu # probably doesn't exist, but wouldn't/shouldn't block anything
>> liboffload/src/phi # may exist in some form
>> liboffload/src/cuda # may exist in some form outside of the OMP work
>>
>> The end result would be liboffload.
>>
>> The pieces above and below the common middle layer API are programming-model
>> or hardware specific. To add a new hw backend you just implement the
>> things the middle layer needs. To add a new programming model you
>> build on top of the common layer. I'm not trying to force
>> anyone/everyone to switch to this now - I'm hoping that by being a
>> squeaky wheel this isolation of design and layers is there from the
>> start - even if not perfect. I think it's sloppy to not consider this
>> actually. LLVM's code generation is clean and has a nice separation
>> per target (for the most part) - why should the offload library have a
>> bad design which just needs to be refactored later? I've seen others
>> in the community beat up Intel to force them to have higher quality
>> code before inclusion... some of this may actually be just minor
>> refactoring to come close to the target. (No pun intended)
>> -------------
>> If others become open to this design - I'm happy to contribute more
>> tangible details on the actual middle API.
>>
>> The objects which the driver has to deal with may, and probably do,
>> overlap to some extent with the objects liboffload has to load or
>> deal with. Is there an API the driver can hook into to magically
>> handle that, or is it all per-device and one-off?


