<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000'><br><hr id="zwchr"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Jason Henline" &lt;jhen@google.com&gt;<br><b>To: </b>"Hal Finkel" &lt;hfinkel@anl.gov&gt;<br><b>Cc: </b>llvm-dev@lists.llvm.org<br><b>Sent: </b>Wednesday, March 9, 2016 7:16:01 PM<br><b>Subject: </b>Re: [llvm-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries<br><br><div dir="ltr">Thanks for your input, Hal.<div><br></div><div>I think that having support for running host-side tasks makes this library much more useful, not only for acceleratorless systems, but even for those with accelerators (especially if you can place dependency edges between the host tasks and the accelerator tasks).<br></div><div><br></div><div id="DWT15493">Based on your comments, I think that supporting host-side tasks sounds like something that should be added to our roadmap, and it should be pretty simple to do within the current model.</div></div></blockquote>Great!<br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><div dir="ltr"><div id="DWT15495"> However, supporting dependency edges between different "platforms" (as StreamExecutor calls them) such as host and GPU could be slightly more challenging. The current model organizes each stream of execution as belonging to a parent platform, and streams are the structures that are meant to manage dependency edges. 
It will probably take some thought to decide how to do that in the right way.<br></div></div></blockquote>You might get a fair amount of mileage just by allowing host tasks to be inserted into the stream of device tasks (I'm making assumptions here based on how CUDA streams work). Do you currently support inter-stream synchronization generally?<br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><div dir="ltr"><div></div><div><br></div><div>Also, does your implementation support, or do you plan on supporting, CUDA-style unified memory between host and device?<br></div><div><br></div><div id="DWT15494">I'm not sure how much this had been considered before you mentioned it. It is not supported right now, but I think it would fit naturally into the design model. Currently a custom C++ type is used to represent device memory, and so we could add a sub-type to represent unified memory. In fact, a similar type of thing is already done for the host platform, where memcpy operations between host and device are converted to nops. I think this would be a pretty easy change in the current framework.<br></div></div></blockquote>Interesting. Definitely worth talking about (although probably on some other dedicated thread).<br><br>In case you can't already tell, I'm supportive of this kind of functionality in LLVM's ecosystem. I'm excited that this might be a near-term possibility (especially once you have the ability to execute host tasks). 
There might also be a relationship between this library and what we need to implement the upcoming C++17 parallel algorithms library.<br><br>Thanks again,<br>Hal<br><br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><div dir="ltr"><div></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Mar 9, 2016 at 4:42 PM Hal Finkel <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><br><hr><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Jason Henline" <<a href="mailto:jhen@google.com" target="_blank">jhen@google.com</a>><br><b>To: </b>"Hal Finkel" <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>><br><b>Cc: </b><a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br><b>Sent: </b>Wednesday, March 9, 2016 5:04:53 PM<br><b>Subject: </b>Re: [llvm-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries</blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><br><br><div dir="ltr"><div>Hi Hal,</div><div><br></div><div>Thanks for taking a 
look at the proposal.</div><div><br></div><div>The current version of StreamExecutor has partial support for a "host" platform which performs work on the CPU. Its interface is the same as that of the CUDA platform discussed in the design documentation, but right now it does not support launching user-defined kernels, so it is very limited. The host platform does manage a thread pool internally and uses those threads to execute the "canned" StreamExecutor operations (BLAS, FFT, etc.), but that's all it can currently do. I think it would be relatively easy to extend the host platform to support launching of user-defined kernels, and then its functionality may overlap quite a bit with OpenMP, but we don't have any active plans to add that support at this time.</div></div></blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"></blockquote>I think that having support for running host-side tasks makes this library much more useful, not only for acceleratorless systems, but even for those with accelerators (especially if you can place dependency edges between the host tasks and the accelerator tasks).<br><br>Also, does your implementation support, or do you plan on supporting, CUDA-style unified memory between host and device?<br><br>Thanks again,<br>Hal</div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><div dir="ltr"><div> However, this is something we may pursue in 
the long run because of the added flexibility it would provide for porting accelerator code to a device without an accelerator, etc.</div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Mar 9, 2016 at 2:31 PM Hal Finkel <<a href="mailto:hfinkel@anl.gov" target="_blank">hfinkel@anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><br><hr><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Jason Henline via llvm-dev" <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br><b>To: </b><a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br><b>Sent: </b>Wednesday, March 9, 2016 4:20:15 PM<br><b>Subject: </b>[llvm-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries</blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><br><br><div dir="ltr"><div style="font-size: 13px; line-height: 19.5px;">At Google we're doing a lot of work on parallel programming models for CPUs, GPUs and other platforms. One place where we're investing a lot is parallel libraries, especially those closely tied to compiler technology like runtime and math libraries. 
We would like to develop these in the open, and the natural place seems to be as a subproject in LLVM if others in the community are interested.<br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Initially, we'd like to open source our StreamExecutor runtime library, which is used for simplifying the management of data-parallel workflows on accelerator devices and can also be extended to support other hardware platforms. We'd like to teach Clang to use StreamExecutor when targeting CUDA and work on other integrations, but that makes much more sense if it is part of the LLVM project.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">However, we think the LLVM subproject should be organized as a set of several libraries with StreamExecutor as just the first instance. As just one example of how creating a unified parallelism subproject could help with code sharing, the StreamExecutor library contains some nice wrappers around the CUDA driver API and OpenCL API that create a unified API for managing all kinds of GPU devices. This unified GPU wrapper would be broadly applicable for libraries that need to communicate with GPU devices.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Of course, there is already an LLVM subproject for a parallel runtime library: OpenMP! So there is a question of how it would fit into this picture. Eventually, it might make sense to pull in the OpenMP project as a library in this proposed new subproject. In particular, there is a good chance that OpenMP and StreamExecutor could share code for offloading to GPUs and managing workloads on those devices. This is discussed at the end of the StreamExecutor documentation below. 
However, if it turns out that the needs of OpenMP are too specialized to fit well in a generic parallelism project, then it may make sense to leave OpenMP as a separate LLVM subproject so it can focus on serving the particular needs of OpenMP.</div></div></blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"></blockquote>The document starts by talking about work you're doing on "CPUs, GPUs and other platforms", but you've only really discussed accelerators here. I'm wondering if there is any overlap, either current or planned, with the functionality that host-side OpenMP provides. For example, is there some kind of host-side thread pool / task queue?<br><br>Thanks in advance,<br>Hal<br><br>P.S. I'm really happy that it looks like you have a sane API here for handling multi-GPU systems. 
Dealing with cudaSetDevice is a real pain.<br><br><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"></blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><div dir="ltr"><div style="font-size: 13px; line-height: 19.5px;"></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Documentation for the StreamExecutor library that is being proposed for open-sourcing is included below to give a sense of what it is, in order to give context for how it might fit into a general parallelism LLVM subproject.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">What do folks think? Is there general interest in something like this? If so, we can start working on getting a project in place and sketching out a skeleton for how it would be organized, as well as contributing StreamExecutor to it. 
We're happy to iterate on the particulars to figure out what works for the community.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">=============================================</div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor Runtime Library Documentation</div><div style="font-size: 13px; line-height: 19.5px;">=============================================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">What is StreamExecutor?</div><div style="font-size: 13px; line-height: 19.5px;">========================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">**StreamExecutor** is a unified wrapper around the **CUDA** and **OpenCL** host-side programming models (runtimes). It lets host code target either CUDA or OpenCL devices with identically-functioning data-parallel kernels. StreamExecutor manages the execution of concurrent work targeting the accelerator similarly to how an Executor_ from the Google APIs client library manages the execution of concurrent work on the host.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
_Executor: <a href="http://google.github.io/google-api-cpp-client/latest/doxygen/classgoogleapis_1_1thread_1_1Executor.html" target="_blank">http://google.github.io/google-api-cpp-client/latest/doxygen/classgoogleapis_1_1thread_1_1Executor.html</a></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor is currently used as the runtime for the vast majority of Google's internal GPGPU applications, and a snapshot of it is included in the open-source TensorFlow_ project, where it serves as the GPGPU runtime.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. _TensorFlow: <a href="https://www.tensorflow.org/" target="_blank">https://www.tensorflow.org</a></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">It is currently proposed that StreamExecutor itself be independently open-sourced. 
As part of that proposal, this document describes the basics of its design and explains why it would fit in well as an LLVM subproject.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">-------------------</div><div style="font-size: 13px; line-height: 19.5px;">Key points</div><div style="font-size: 13px; line-height: 19.5px;">-------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">* abstracts the underlying accelerator platform (avoids locking you into a single vendor, and lets you write code without thinking about which platform you'll be running on).</div><div style="font-size: 13px; line-height: 19.5px;">* provides an open-source alternative to the CUDA runtime library.</div><div style="font-size: 13px; line-height: 19.5px;">* gives users a stream management model whose terminology matches that of the CUDA programming model.</div><div style="font-size: 13px; line-height: 19.5px;">* makes use of modern C++ to create a safe, efficient, easy-to-use programming interface.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor makes it easy to:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">* move data between host and accelerator (and also between peer accelerators).</div><div style="font-size: 13px; line-height: 19.5px;">* execute data-parallel kernels written in the OpenCL or CUDA kernel languages.</div><div style="font-size: 13px; line-height: 19.5px;">* inspect the capabilities of a GPU-like device at runtime.</div><div style="font-size: 13px; line-height: 19.5px;">* manage 
multiple devices.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;">Example code snippet</div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The StreamExecutor API uses abstractions that will be familiar to those who have worked with other GPU APIs: **Streams**, **Timers**, and **Kernels**. Its API is *fluent*, meaning that it allows the user to chain together a sequence of related operations on a stream, as in the following code snippet:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::Stream stream(executor);</div><div style="font-size: 13px; line-height: 19.5px;"> se::Timer timer(executor);</div><div style="font-size: 13px; line-height: 19.5px;"> stream.InitWithTimer(&timer)</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenStartTimer(&timer)</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenLaunch(se::ThreadDim(dim_block_x, dim_block_y),</div><div style="font-size: 13px; line-height: 19.5px;"> se::BlockDim(dim_grid_x, dim_grid_y),</div><div style="font-size: 13px; line-height: 19.5px;"> my_kernel,</div><div style="font-size: 13px; line-height: 19.5px;"> arg0, arg1, arg2)</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenStopTimer(&timer)</div><div style="font-size: 13px; line-height: 19.5px;"> .BlockHostUntilDone();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The name of the kernel 
being launched in the snippet above is `my_kernel` and the arguments being passed to the kernel are `arg0`, `arg1`, and `arg2`. Kernels with any number of arguments of any types are supported, and the number and types of the arguments are checked at compile time.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">How does it work?</div><div style="font-size: 13px; line-height: 19.5px;">=======================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;">Detailed example</div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The following example shows how we can use StreamExecutor to create a `TypedKernel` instance, associate device code with that instance, and then use that instance to schedule work on an accelerator device.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include <cassert></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include "stream_executor.h"</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> namespace se = streamexecutor;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // A PTX string defining a CUDA kernel.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // This PTX string represents a kernel that takes two arguments: an input value</div><div style="font-size: 13px; line-height: 19.5px;"> // and an output pointer. The input value is a floating point number. The output</div><div style="font-size: 13px; line-height: 19.5px;"> // value is a pointer to a floating point value in device memory. 
The output</div><div style="font-size: 13px; line-height: 19.5px;"> // pointer is where the output from the kernel will be written.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // The kernel adds a fixed floating point value to the input and writes the</div><div style="font-size: 13px; line-height: 19.5px;"> // result to the output location.</div><div style="font-size: 13px; line-height: 19.5px;"> static constexpr const char *KERNEL_PTX = R"(</div><div style="font-size: 13px; line-height: 19.5px;"> .version 3.1</div><div style="font-size: 13px; line-height: 19.5px;"> .target sm_20</div><div style="font-size: 13px; line-height: 19.5px;"> .address_size 64</div><div style="font-size: 13px; line-height: 19.5px;"> .visible .entry add_mystery_value(</div><div style="font-size: 13px; line-height: 19.5px;"> .param .f32 float_literal,</div><div style="font-size: 13px; line-height: 19.5px;"> .param .u64 result_loc</div><div style="font-size: 13px; line-height: 19.5px;"> ) {</div><div style="font-size: 13px; line-height: 19.5px;"> .reg .u64 %rl<2>;</div><div style="font-size: 13px; line-height: 19.5px;"> .reg .f32 %f<2>;</div><div style="font-size: 13px; line-height: 19.5px;"> ld.param.f32 %f1, [float_literal];</div><div style="font-size: 13px; line-height: 19.5px;"> ld.param.u64 %rl1, [result_loc];</div><div style="font-size: 13px; line-height: 19.5px;"> add.f32 %f1, %f1, 123.0;</div><div style="font-size: 13px; line-height: 19.5px;"> st.f32 [%rl1], %f1;</div><div style="font-size: 13px; line-height: 19.5px;"> ret;</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"> )";</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // The number of arguments expected by the kernel described in</div><div style="font-size: 13px; line-height: 19.5px;"> // KERNEL_PTX.</div><div 
style="font-size: 13px; line-height: 19.5px;"> static constexpr int KERNEL_ARITY = 2;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // The name of the kernel described in KERNEL_PTX.</div><div style="font-size: 13px; line-height: 19.5px;"> static constexpr const char *KERNEL_NAME = "add_mystery_value";</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // The value added to the input in the kernel described in KERNEL_PTX.</div><div style="font-size: 13px; line-height: 19.5px;"> static constexpr float MYSTERY_VALUE = 123.0f;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> int main(int argc, char *argv[]) {</div><div style="font-size: 13px; line-height: 19.5px;"> // Get a CUDA Platform object. (Other platforms such as OpenCL are also</div><div style="font-size: 13px; line-height: 19.5px;"> // supported.)</div><div style="font-size: 13px; line-height: 19.5px;"> se::Platform *platform =</div><div style="font-size: 13px; line-height: 19.5px;"> se::MultiPlatformManager::PlatformWithName("cuda").ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Get a StreamExecutor for the chosen Platform. 
Multiple devices are</div><div style="font-size: 13px; line-height: 19.5px;"> // supported; we indicate here that we want to run on device 0.</div><div style="font-size: 13px; line-height: 19.5px;"> const int device_ordinal = 0;</div><div style="font-size: 13px; line-height: 19.5px;"> se::StreamExecutor *executor =</div><div style="font-size: 13px; line-height: 19.5px;"> platform->ExecutorForDevice(device_ordinal).ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Create a MultiKernelLoaderSpec, which knows where to find the code for our</div><div style="font-size: 13px; line-height: 19.5px;"> // kernel. In this case, the code is stored in memory as a PTX string.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // Note that the "arity" and name specified here must match the "arity" and name</div><div style="font-size: 13px; line-height: 19.5px;"> // of the kernel defined in the PTX string.</div><div style="font-size: 13px; line-height: 19.5px;"> se::MultiKernelLoaderSpec kernel_loader_spec(KERNEL_ARITY);</div><div style="font-size: 13px; line-height: 19.5px;"> kernel_loader_spec.AddCudaPtxInMemory(KERNEL_PTX, KERNEL_NAME);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Next create a kernel handle, which we will associate with our kernel code</div><div style="font-size: 13px; line-height: 19.5px;"> // (i.e., the PTX string). 
The type of this handle is a bit verbose, so we</div><div style="font-size: 13px; line-height: 19.5px;"> // create an alias for it.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // This specific type represents a kernel that takes two arguments: a floating</div><div style="font-size: 13px; line-height: 19.5px;"> // point value and a pointer to a floating point value in device memory.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // A type like this is nice to have because it enables static type checking of</div><div style="font-size: 13px; line-height: 19.5px;"> // kernel arguments when we enqueue work on a stream.</div><div style="font-size: 13px; line-height: 19.5px;"> using KernelType = se::TypedKernel<float, se::DeviceMemory<float> *>;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Now instantiate an object of the specific kernel type we declared above.</div><div style="font-size: 13px; line-height: 19.5px;"> // The kernel object is not yet connected with the device code that we want it</div><div style="font-size: 13px; line-height: 19.5px;"> // to run (that happens with the call to GetKernel below), so it cannot be</div><div style="font-size: 13px; line-height: 19.5px;"> // used to execute work on the device yet.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // However, the kernel object is not completely empty when it is created. 
From</div><div style="font-size: 13px; line-height: 19.5px;"> // the StreamExecutor passed into its constructor it knows which platform it</div><div style="font-size: 13px; line-height: 19.5px;"> // is targeted for, and it also knows which device it will run on.</div><div style="font-size: 13px; line-height: 19.5px;"> KernelType kernel(executor);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Use the MultiKernelLoaderSpec defined above to load the kernel code onto</div><div style="font-size: 13px; line-height: 19.5px;"> // the device pointed to by the kernel object and to make that kernel object a</div><div style="font-size: 13px; line-height: 19.5px;"> // handle to the kernel code loaded on that device.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // The MultiKernelLoaderSpec may contain code for several different platforms,</div><div style="font-size: 13px; line-height: 19.5px;"> // but the kernel object has an associated platform, so there is no confusion</div><div style="font-size: 13px; line-height: 19.5px;"> // about which code should be loaded.</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // After this call the kernel object can be used to launch its kernel on its</div><div style="font-size: 13px; line-height: 19.5px;"> // device.</div><div style="font-size: 13px; line-height: 19.5px;"> executor->GetKernel(kernel_loader_spec, &kernel);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Allocate memory in the device memory space to hold the result of the kernel</div><div style="font-size: 13px; line-height: 19.5px;"> // call. 
This memory will be freed when this object goes out of scope.</div><div style="font-size: 13px; line-height: 19.5px;"> se::ScopedDeviceMemory<float> result = executor->AllocateOwnedScalar<float>();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Create a stream on which to schedule device operations.</div><div style="font-size: 13px; line-height: 19.5px;"> se::Stream stream(executor);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Schedule a kernel launch on the new stream and block until the kernel</div><div style="font-size: 13px; line-height: 19.5px;"> // completes. The kernel call executes asynchronously on the device, so we</div><div style="font-size: 13px; line-height: 19.5px;"> // could do more work on the host before calling BlockHostUntilDone.</div><div style="font-size: 13px; line-height: 19.5px;"> const float kernel_input_argument = 42.5f;</div><div style="font-size: 13px; line-height: 19.5px;"> stream.Init()</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenLaunch(se::ThreadDim(), se::BlockDim(), kernel,</div><div style="font-size: 13px; line-height: 19.5px;"> kernel_input_argument, result.ptr())</div><div style="font-size: 13px; line-height: 19.5px;"> .BlockHostUntilDone();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Copy the result of the kernel call from device back to the host.</div><div style="font-size: 13px; line-height: 19.5px;"> float host_result = 0.0f;</div><div style="font-size: 13px; line-height: 19.5px;"> executor->SynchronousMemcpyD2H(result.cref(), sizeof(host_result),</div><div style="font-size: 13px; line-height: 19.5px;"> &host_result);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // Verify that the correct result was 
computed.</div><div style="font-size: 13px; line-height: 19.5px;"> assert((kernel_input_argument + MYSTERY_VALUE) == host_result);</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;">Kernel Loader Specs</div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">An instance of the class `MultiKernelLoaderSpec` is used to encapsulate knowledge of where the device code for a kernel is stored and what format it is in. Given a `MultiKernelLoaderSpec` and an uninitialized `TypedKernel`, calling the `StreamExecutor::GetKernel` method will load the code onto the device and associate the `TypedKernel` instance with that loaded code. So, in order to initialize a `TypedKernel` instance, it is first necessary to create a `MultiKernelLoaderSpec`.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">A `MultiKernelLoaderSpec` supports a different method for adding device code</div><div style="font-size: 13px; line-height: 19.5px;">for each combination of platform, format, and storage location. 
The following</div><div style="font-size: 13px; line-height: 19.5px;">table shows some examples:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">=========== ======= =========== =========================</div><div style="font-size: 13px; line-height: 19.5px;">Platform    Format  Location    Setter</div><div style="font-size: 13px; line-height: 19.5px;">=========== ======= =========== =========================</div><div style="font-size: 13px; line-height: 19.5px;">CUDA        PTX     disk        `AddCudaPtxOnDisk`</div><div style="font-size: 13px; line-height: 19.5px;">CUDA        PTX     memory      `AddCudaPtxInMemory`</div><div style="font-size: 13px; line-height: 19.5px;">CUDA        cubin   disk        `AddCudaCubinOnDisk`</div><div style="font-size: 13px; line-height: 19.5px;">CUDA        cubin   memory      `AddCudaCubinInMemory`</div><div style="font-size: 13px; line-height: 19.5px;">OpenCL      text    disk        `AddOpenCLTextOnDisk`</div><div style="font-size: 13px; line-height: 19.5px;">OpenCL      text    memory      `AddOpenCLTextInMemory`</div><div style="font-size: 13px; line-height: 19.5px;">OpenCL      binary  disk        `AddOpenCLBinaryOnDisk`</div><div style="font-size: 13px; line-height: 19.5px;">OpenCL      binary  memory      `AddOpenCLBinaryInMemory`</div><div style="font-size: 13px; line-height: 19.5px;">=========== ======= =========== =========================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The specific method used in the example is `AddCudaPtxInMemory`, but all other methods are used similarly.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">------------------------------------</div><div style="font-size: 13px; line-height: 19.5px;">Compiler Support for StreamExecutor</div><div style="font-size: 13px; line-height: 19.5px;">------------------------------------</div><div 
style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">General strategies</div><div style="font-size: 13px; line-height: 19.5px;">-------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">For illustrative purposes, the PTX code in the example is written by hand and appears as a string literal in the source code file, but it is far more typical for the kernel code to be expressed in a high level language like CUDA C++ or OpenCL C and for the device machine code to be generated by a compiler.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">There are several ways we can load compiled device code using StreamExecutor.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">One possibility is that the build system could write the compiled device code to a file on disk. This can then be added to a `MultiKernelLoaderSpec` by using one of the `OnDisk` setters.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Another option is to add a feature to the compiler which embeds the compiled device code into the host executable and provides some symbol (probably with a name based on the name of the kernel) that allows the host code to refer to the embedded code data.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">In fact, as discussed below, in the current use of StreamExecutor inside Google, the compiler goes even further and generates an instance of `MultiKernelLoaderSpec` for each kernel. 
This means the application author doesn't have to know anything about how or where the compiler decided to store the compiled device code, but instead gets a pre-made loader object that handles all those details.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Compiler-generated code makes things safe</div><div style="font-size: 13px; line-height: 19.5px;">--------------------------------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Two of the steps in the example above are dangerous because they lack static safety checks: instantiating the `MultiKernelLoaderSpec` and specializing the `TypedKernel` class template. This section discusses how compiler support for StreamExecutor can make these steps safe.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Instantiating a `MultiKernelLoaderSpec` requires specifying three things:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">1. the kernel *arity* (number of parameters),</div><div style="font-size: 13px; line-height: 19.5px;">2. the kernel name,</div><div style="font-size: 13px; line-height: 19.5px;">3. a string containing the device machine code for the kernel (either as assembly, or some sort of object file).</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The problem with this is that the kernel name and the number of parameters are already fully determined by the kernel's machine code. 
In the best case scenario the *arity* and name arguments passed to the `MultiKernelLoaderSpec` methods match the information in the machine code and are simply redundant, but in the worst case these arguments contradict the information in the machine code and we get a runtime error when we try to load the kernel.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The second unsafe operation is specifying the kernel parameter types as type arguments to the `TypedKernel` class template. The specified types must match the types defined in the kernel machine code, but again there is no compile-time checking that these types match. Failure to match these types will result in a runtime error when the kernel is launched.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">We would like the compiler to perform these checks for the application author, so as to eliminate this source of runtime errors. In particular, we want the compiler to create an appropriate `MultiKernelLoaderSpec` instance and `TypedKernel` specialization for each kernel definition.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">One of the main goals of open-sourcing StreamExecutor is to let us add this code generation capability to Clang when the user has chosen to use StreamExecutor as their runtime for accelerator operations.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Google has been using an internally developed CUDA compiler based on Clang called **gpucc** that generates code for StreamExecutor in this way. 
The code below shows how the example above would be written using gpucc to generate the unsafe parts of the code.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The kernel is defined in a high-level language (CUDA C++ in this example) in its own file:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // File: add_mystery_value.cu</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> __global__ void add_mystery_value(float input, float *output) {</div><div style="font-size: 13px; line-height: 19.5px;"> *output = input + 42.0f;</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The host code is defined in another file:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // File: example_host_code.cc</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include <cassert></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include "stream_executor.h"</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // This header is generated by the gpucc compiler and it contains the</div><div style="font-size: 13px; line-height: 19.5px;"> // definitions of gpucc::kernel::AddMysteryValue and</div><div style="font-size: 13px; line-height: 19.5px;"> // gpucc::spec::add_mystery_value().</div><div style="font-size: 13px; line-height: 19.5px;"> //</div><div style="font-size: 13px; line-height: 19.5px;"> // The name of this header file is derived from the name of the file containing</div><div style="font-size: 13px; line-height: 19.5px;"> // the kernel code. 
The trailing ".cu" is replaced with ".gpu.h".</div><div style="font-size: 13px; line-height: 19.5px;"> #include "add_mystery_value.gpu.h"</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> namespace se = streamexecutor;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> int main(int argc, char *argv[]) {</div><div style="font-size: 13px; line-height: 19.5px;"> se::Platform *platform =</div><div style="font-size: 13px; line-height: 19.5px;"> se::MultiPlatformManager::PlatformWithName("cuda").ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const int device_ordinal = 0;</div><div style="font-size: 13px; line-height: 19.5px;"> se::StreamExecutor *executor =</div><div style="font-size: 13px; line-height: 19.5px;"> platform->ExecutorForDevice(device_ordinal).ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // AddMysteryValue is an instance of TypedKernel generated by gpucc. The</div><div style="font-size: 13px; line-height: 19.5px;"> // template arguments are chosen by the compiler to match the parameters of</div><div style="font-size: 13px; line-height: 19.5px;"> // the add_mystery_value kernel.</div><div style="font-size: 13px; line-height: 19.5px;"> gpucc::kernel::AddMysteryValue kernel(executor);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> // gpucc::spec::add_mystery_value() is generated by gpucc. 
It returns a</div><div style="font-size: 13px; line-height: 19.5px;"> // MultiKernelLoaderSpec that knows how to find the compiled code for the</div><div style="font-size: 13px; line-height: 19.5px;"> // add_mystery_value kernel.</div><div style="font-size: 13px; line-height: 19.5px;"> executor->GetKernel(gpucc::spec::add_mystery_value(), &kernel);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::ScopedDeviceMemory<float> result = executor->AllocateOwnedScalar<float>();</div><div style="font-size: 13px; line-height: 19.5px;"> se::Stream stream(executor);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const float kernel_input_argument = 42.5f;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> stream.Init()</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenLaunch(se::ThreadDim(), se::BlockDim(), kernel,</div><div style="font-size: 13px; line-height: 19.5px;"> kernel_input_argument, result.ptr())</div><div style="font-size: 13px; line-height: 19.5px;"> .BlockHostUntilDone();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> float host_result = 0.0f;</div><div style="font-size: 13px; line-height: 19.5px;"> executor->SynchronousMemcpyD2H(result.cref(), sizeof(host_result),</div><div style="font-size: 13px; line-height: 19.5px;"> &host_result);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> assert((kernel_input_argument + 42.0f) == host_result);</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">This support from the compiler makes the use of StreamExecutor safe and easy.</div><div 
style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Compiler support for triple angle bracket kernel launches</div><div style="font-size: 13px; line-height: 19.5px;">----------------------------------------------------------</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">For even greater ease of use, Google's gpucc CUDA compiler also supports an integrated mode that looks like NVIDIA's `CUDA programming model`_, which uses triple angle brackets (`<<<>>>`) to launch kernels.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. _CUDA programming model: <a href="http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels" target="_blank">http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels</a></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"> :emphasize-lines: 22</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include <cassert></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include "stream_executor.h"</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> namespace se = streamexecutor;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> __global__ void add_mystery_value(float input, float *output) {</div><div style="font-size: 13px; line-height: 19.5px;"> *output = input + 42.0f;</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> int main(int argc, char *argv[]) {</div><div style="font-size: 13px; line-height: 19.5px;"> se::Platform *platform =</div><div style="font-size: 13px; line-height: 19.5px;"> se::MultiPlatformManager::PlatformWithName("cuda").ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const int device_ordinal = 0;</div><div style="font-size: 13px; line-height: 19.5px;"> se::StreamExecutor *executor =</div><div style="font-size: 13px; line-height: 19.5px;"> platform->ExecutorForDevice(device_ordinal).ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::ScopedDeviceMemory<float> result = executor->AllocateOwnedScalar<float>();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const float kernel_input_argument = 42.5f;</div><div style="font-size: 13px; line-height: 19.5px;"> add_mystery_value<<<1, 
1>>>(kernel_input_argument, *result.ptr());</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> float host_result = 0.0f;</div><div style="font-size: 13px; line-height: 19.5px;"> executor->SynchronousMemcpyD2H(result.cref(), sizeof(host_result),</div><div style="font-size: 13px; line-height: 19.5px;"> &host_result);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> assert((kernel_input_argument + 42.0f) == host_result);</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Under the hood, gpucc converts the triple angle bracket kernel call into a series of calls to the StreamExecutor library similar to the calls seen in the previous examples.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Clang currently supports the triple angle bracket kernel call syntax for CUDA compilation by replacing a triple angle bracket call with calls to the NVIDIA CUDA runtime library, but it would be easy to add a compiler flag to tell Clang to emit calls to the StreamExecutor library instead. There are several benefits to supporting this mode of compilation in Clang:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
_benefits-of-streamexecutor:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">* StreamExecutor is a high-level, modern C++ API, so it is easier to use and less prone to error than the NVIDIA CUDA runtime and the OpenCL runtime.</div><div style="font-size: 13px; line-height: 19.5px;">* StreamExecutor will be open-source software, so GPU code will not have to depend on opaque binary blobs like the NVIDIA CUDA runtime library.</div><div style="font-size: 13px; line-height: 19.5px;">* Using StreamExecutor as the runtime would allow for easy extension of the triple angle bracket kernel launch syntax to support different accelerator programming models.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Supporting other platforms</div><div style="font-size: 13px; line-height: 19.5px;">===========================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor currently supports CUDA and OpenCL platforms out of the box, but it uses a platform plugin architecture that makes it easy to add new platforms at any time. The CUDA and OpenCL platforms are both implemented as platform plugins in this way, so they serve as good examples for future platform developers of how to write these kinds of plugins.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Canned operations</div><div style="font-size: 13px; line-height: 19.5px;">==================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">StreamExecutor provides several predefined kernels for common data-parallel operations. 
The supported classes of operations are:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">* BLAS: basic linear algebra subprograms,</div><div style="font-size: 13px; line-height: 19.5px;">* DNN: deep neural networks,</div><div style="font-size: 13px; line-height: 19.5px;">* FFT: fast Fourier transforms, and</div><div style="font-size: 13px; line-height: 19.5px;">* RNG: random number generation.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Here is an example of using a canned operation to perform random number generation:</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. code-block:: c++</div><div style="font-size: 13px; line-height: 19.5px;"> :emphasize-lines: 12-13,17,34-35</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include <array></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> #include "cuda/cuda_rng.h"</div><div style="font-size: 13px; line-height: 19.5px;"> #include "stream_executor.h"</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> namespace se = streamexecutor;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> int main(int argc, char *argv[]) {</div><div style="font-size: 13px; line-height: 19.5px;"> se::Platform *platform =</div><div style="font-size: 13px; line-height: 19.5px;"> se::MultiPlatformManager::PlatformWithName("cuda").ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::PluginConfig plugin_config;</div><div style="font-size: 13px; line-height: 19.5px;"> 
plugin_config.SetRng(se::cuda::kCuRandPlugin);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const int device_ordinal = 0;</div><div style="font-size: 13px; line-height: 19.5px;"> se::StreamExecutor *executor =</div><div style="font-size: 13px; line-height: 19.5px;"> platform->ExecutorForDeviceWithPluginConfig(device_ordinal, plugin_config)</div><div style="font-size: 13px; line-height: 19.5px;"> .ValueOrDie();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> const uint8 seed[] = {0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7,</div><div style="font-size: 13px; line-height: 19.5px;"> 0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};</div><div style="font-size: 13px; line-height: 19.5px;"> constexpr uint64 random_element_count = 1024;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> using HostArray = std::array<float, random_element_count>;</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> HostArray host_memory;</div><div style="font-size: 13px; line-height: 19.5px;"> const size_t data_size = host_memory.size() * sizeof(HostArray::value_type);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::ScopedDeviceMemory<float> device_memory =</div><div style="font-size: 13px; line-height: 19.5px;"> executor->AllocateOwnedArray<float>(random_element_count);</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> se::Stream stream(executor);</div><div style="font-size: 13px; line-height: 19.5px;"> stream.Init()</div><div style="font-size: 13px; line-height: 19.5px;"> .ThenSetRngSeed(seed, sizeof(seed))</div><div style="font-size: 13px; line-height: 19.5px;"> 
.ThenPopulateRandUniform(device_memory.ptr())</div><div style="font-size: 13px; line-height: 19.5px;"> .BlockHostUntilDone();</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"> executor->SynchronousMemcpyD2H(*device_memory.ptr(), data_size,</div><div style="font-size: 13px; line-height: 19.5px;"> host_memory.data());</div><div style="font-size: 13px; line-height: 19.5px;"> }</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Each platform plugin can define its own canned operation plugins for these operations or choose to leave any of them unimplemented.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Contrast with OpenMP</div><div style="font-size: 13px; line-height: 19.5px;">=====================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Recent versions of OpenMP also provide a high-level, easy-to-use interface for running data-parallel workloads on an accelerator device. One big difference between OpenMP's approach and that of StreamExecutor is that OpenMP generates both the kernel code that runs on the device and the host-side code needed to launch the kernel, whereas StreamExecutor only generates the host-side code. While the OpenMP model provides the convenience of allowing the author to write their kernel code in standard C/C++, the StreamExecutor model allows for the use of any kernel language (e.g. CUDA C++ or OpenCL C). 
This lets authors use platform-specific features that are only present in platform-specific kernel definition languages.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The philosophy of StreamExecutor is that performance is critical on the device, but less so on the host. As a result, no attempt is made to use a high-level device abstraction during device code generation. Instead, the high-level abstraction provided by StreamExecutor is used only for the host-side code that moves data and launches kernels. This host-side work is tedious and is not performance critical, so it benefits from being wrapped in a high-level library that can support a wide range of platforms in an easily extensible manner.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Cooperation with OpenMP</div><div style="font-size: 13px; line-height: 19.5px;">========================</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">The Clang OpenMP community is currently in the process of `designing their implementation`_ of offloading support. They will want the compiler to convert the various standardized target-oriented OpenMP pragmas into device code to execute on an accelerator and host code to load and run that device code. StreamExecutor may provide a convenient API for OpenMP to use to generate their host-side code.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">.. 
_designing their implementation: <a href="https://drive.google.com/a/google.com/file/d/0B-jX56_FbGKRM21sYlNYVnB4eFk/view" target="_blank">https://drive.google.com/a/google.com/file/d/0B-jX56_FbGKRM21sYlNYVnB4eFk/view</a></div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">In addition to the :ref:`benefits<benefits-of-streamexecutor>` that all users of StreamExecutor enjoy over the alternative host-side runtime libraries, OpenMP and StreamExecutor may mutually benefit by sharing work to support new platforms. If OpenMP makes use of StreamExecutor, then it should be simple for OpenMP to add support for any new platforms that StreamExecutor supports in the future. Similarly, for any platforms OpenMP would like to target, they may add that support in StreamExecutor and take advantage of the knowledge of platform support in the StreamExecutor community. The resulting new platform support would then be available not just within OpenMP, but also to any user of StreamExecutor.</div><div style="font-size: 13px; line-height: 19.5px;"><br></div><div style="font-size: 13px; line-height: 19.5px;">Although OpenMP and StreamExecutor support different programming models, some of the work they perform under the hood will likely be very similar. By sharing code and domain expertise, both projects will be improved and strengthened as their capabilities are expanded. The StreamExecutor community looks forward to much collaboration and discussion with OpenMP about the best places and ways to cooperate.</div><div><br></div></div>
<br></blockquote></div></div><div><div style="font-family: arial,helvetica,sans-serif; font-size: 10pt; color: rgb(0, 0, 0);"><blockquote style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px; color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;">_______________________________________________<br>LLVM Developers mailing list<br><a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br><a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br></blockquote></div></div></blockquote></div>
</blockquote></div></div></blockquote></div>
</blockquote><br><br><br>-- <br><div><span name="x"></span>Hal Finkel<br>Assistant Computational Scientist<br>Leadership Computing Facility<br>Argonne National Laboratory<span name="x"></span><br></div></div></body></html>