<div dir="ltr">Our Windows bot is unhappy: <div><a href="http://lab.llvm.org:8011/builders/sanitizer-windows/builds/25679/steps/run%20tests/logs/stdio">http://lab.llvm.org:8011/builders/sanitizer-windows/builds/25679/steps/run%20tests/logs/stdio</a><br><div><pre style="font-family:'courier new',courier,monotype,monospace;color:rgb(0,0,0);font-size:medium"><span class="gmail-stdout">C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2065: 'StringRef' : undeclared identifier
C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2146: syntax error : missing ')' before identifier 'NormalizedTriple'
C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2761: 'std::string clang::driver::Action::getOffloadingFileNamePrefix(llvm::StringRef) const' : member function redeclaration not allowed
C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2059: syntax error : ')'
C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2143: syntax error : missing ';' before '{'
C:\b\slave\sanitizer-windows\llvm\tools\clang\lib\Driver\Action.cpp(107) : error C2447: '{' : missing function header (old-style formal list?)</span></pre></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jul 15, 2016 at 4:13 PM, Samuel Antao via cfe-commits <span dir="ltr"><<a href="mailto:cfe-commits@lists.llvm.org" target="_blank">cfe-commits@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Author: sfantao<br>
Date: Fri Jul 15 18:13:27 2016<br>
New Revision: 275645<br>
<br>
URL: <a href="http://llvm.org/viewvc/llvm-project?rev=275645&view=rev" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project?rev=275645&view=rev</a><br>
Log:<br>
[CUDA][OpenMP] Create generic offload action<br>
<br>
Summary:<br>
This patch replaces the CUDA-specific action by a generic offload action. The offload action may have multiple dependences, classified as "host" and "device". The way this generic offloading action is used is very similar to what is done today by the CUDA implementation: it is used to set a specific toolchain and architecture on its dependences during the generation of jobs.<br>
<br>
This patch also proposes propagating the offloading information through the action graph, so that it can be easily retrieved at any time during the generation of commands. This allows, e.g., the clang tool to evaluate whether CUDA should be supported for the device or host, and ptxas to easily retrieve the target architecture.<br>
<br>
This is an example of what the action graph looks like for the compilation of a single CUDA file with two GPU architectures:<br>
```<br>
0: input, "<a href="http://cudatests.cu" rel="noreferrer" target="_blank">cudatests.cu</a>", cuda, (host-cuda)<br>
1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
2: compiler, {1}, ir, (host-cuda)<br>
3: input, "<a href="http://cudatests.cu" rel="noreferrer" target="_blank">cudatests.cu</a>", cuda, (device-cuda, sm_35)<br>
4: preprocessor, {3}, cuda-cpp-output, (device-cuda, sm_35)<br>
5: compiler, {4}, ir, (device-cuda, sm_35)<br>
6: backend, {5}, assembler, (device-cuda, sm_35)<br>
7: assembler, {6}, object, (device-cuda, sm_35)<br>
8: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {7}, object<br>
9: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {6}, assembler<br>
10: input, "<a href="http://cudatests.cu" rel="noreferrer" target="_blank">cudatests.cu</a>", cuda, (device-cuda, sm_37)<br>
11: preprocessor, {10}, cuda-cpp-output, (device-cuda, sm_37)<br>
12: compiler, {11}, ir, (device-cuda, sm_37)<br>
13: backend, {12}, assembler, (device-cuda, sm_37)<br>
14: assembler, {13}, object, (device-cuda, sm_37)<br>
15: offload, "device-cuda (nvptx64-nvidia-cuda:sm_37)" {14}, object<br>
16: offload, "device-cuda (nvptx64-nvidia-cuda:sm_37)" {13}, assembler<br>
17: linker, {8, 9, 15, 16}, cuda-fatbin, (device-cuda)<br>
18: offload, "host-cuda (powerpc64le-unknown-linux-gnu)" {2}, "device-cuda (nvptx64-nvidia-cuda)" {17}, ir<br>
19: backend, {18}, assembler<br>
20: assembler, {19}, object<br>
21: input, "cuda", object<br>
22: input, "cudart", object<br>
23: linker, {20, 21, 22}, image<br>
```<br>
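As a usage note, phase graphs in this shape can be dumped by the driver itself; a sketch of an invocation matching the example above (flag spellings assumed from the clang driver, not verified against this revision):<br>

```
clang++ -ccc-print-phases --cuda-gpu-arch=sm_35 --cuda-gpu-arch=sm_37 cudatests.cu
```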
The changes in this patch pass the existing regression tests (i.e., the existing functionality is preserved), and the resulting binaries execute correctly on a Power8+K40 machine.<br>
<br>
Reviewers: echristo, hfinkel, jlebar, ABataev, tra<br>
<br>
Subscribers: guansong, andreybokhanko, tcramer, mkuron, cfe-commits, arpith-jacob, carlo.bertolli, caomhin<br>
<br>
Differential Revision: <a href="https://reviews.llvm.org/D18171" rel="noreferrer" target="_blank">https://reviews.llvm.org/D18171</a><br>
<br>
Added:<br>
cfe/trunk/test/Driver/<a href="http://cuda_phases.cu" rel="noreferrer" target="_blank">cuda_phases.cu</a><br>
Modified:<br>
cfe/trunk/include/clang/Driver/Action.h<br>
cfe/trunk/include/clang/Driver/Compilation.h<br>
cfe/trunk/include/clang/Driver/Driver.h<br>
cfe/trunk/lib/Driver/Action.cpp<br>
cfe/trunk/lib/Driver/Driver.cpp<br>
cfe/trunk/lib/Driver/ToolChain.cpp<br>
cfe/trunk/lib/Driver/Tools.cpp<br>
cfe/trunk/lib/Driver/Tools.h<br>
cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp<br>
<br>
Modified: cfe/trunk/include/clang/Driver/Action.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Action.h?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Action.h?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/include/clang/Driver/Action.h (original)<br>
+++ cfe/trunk/include/clang/Driver/Action.h Fri Jul 15 18:13:27 2016<br>
@@ -13,6 +13,7 @@<br>
#include "clang/Basic/Cuda.h"<br>
#include "clang/Driver/Types.h"<br>
#include "clang/Driver/Util.h"<br>
+#include "llvm/ADT/STLExtras.h"<br>
#include "llvm/ADT/SmallVector.h"<br>
<br>
namespace llvm {<br>
@@ -27,6 +28,8 @@ namespace opt {<br>
namespace clang {<br>
namespace driver {<br>
<br>
+class ToolChain;<br>
+<br>
/// Action - Represent an abstract compilation step to perform.<br>
///<br>
/// An action represents an edge in the compilation graph; typically<br>
@@ -50,8 +53,7 @@ public:<br>
enum ActionClass {<br>
InputClass = 0,<br>
BindArchClass,<br>
- CudaDeviceClass,<br>
- CudaHostClass,<br>
+ OffloadClass,<br>
PreprocessJobClass,<br>
PrecompileJobClass,<br>
AnalyzeJobClass,<br>
@@ -65,17 +67,13 @@ public:<br>
VerifyDebugInfoJobClass,<br>
VerifyPCHJobClass,<br>
<br>
- JobClassFirst=PreprocessJobClass,<br>
- JobClassLast=VerifyPCHJobClass<br>
+ JobClassFirst = PreprocessJobClass,<br>
+ JobClassLast = VerifyPCHJobClass<br>
};<br>
<br>
// The offloading kind determines if this action is binded to a particular<br>
// programming model. Each entry reserves one bit. We also have a special kind<br>
// to designate the host offloading tool chain.<br>
- //<br>
- // FIXME: This is currently used to indicate that tool chains are used in a<br>
- // given programming, but will be used here as well once a generic offloading<br>
- // action is implemented.<br>
enum OffloadKind {<br>
OFK_None = 0x00,<br>
// The host offloading tool chain.<br>
@@ -95,6 +93,19 @@ private:<br>
ActionList Inputs;<br>
<br>
protected:<br>
+ ///<br>
+ /// Offload information.<br>
+ ///<br>
+<br>
+ /// The host offloading kind - a combination of kinds encoded in a mask.<br>
+ /// Multiple programming models may be supported simultaneously by the same<br>
+ /// host.<br>
+ unsigned ActiveOffloadKindMask = 0u;<br>
+ /// Offloading kind of the device.<br>
+ OffloadKind OffloadingDeviceKind = OFK_None;<br>
+ /// The Offloading architecture associated with this action.<br>
+ const char *OffloadingArch = nullptr;<br>
+<br>
Action(ActionClass Kind, types::ID Type) : Action(Kind, ActionList(), Type) {}<br>
Action(ActionClass Kind, Action *Input, types::ID Type)<br>
: Action(Kind, ActionList({Input}), Type) {}<br>
@@ -124,6 +135,40 @@ public:<br>
input_const_range inputs() const {<br>
return input_const_range(input_begin(), input_end());<br>
}<br>
+<br>
+ /// Return a string containing the offload kind of the action.<br>
+ std::string getOffloadingKindPrefix() const;<br>
+ /// Return a string that can be used as prefix in order to generate unique<br>
+ /// files for each offloading kind.<br>
+ std::string getOffloadingFileNamePrefix(StringRef NormalizedTriple) const;<br>
+<br>
+ /// Set the device offload info of this action and propagate it to its<br>
+ /// dependences.<br>
+ void propagateDeviceOffloadInfo(OffloadKind OKind, const char *OArch);<br>
+ /// Append the host offload info of this action and propagate it to its<br>
+ /// dependences.<br>
+ void propagateHostOffloadInfo(unsigned OKinds, const char *OArch);<br>
+ /// Set the offload info of this action to be the same as the provided action,<br>
+ /// and propagate it to its dependences.<br>
+ void propagateOffloadInfo(const Action *A);<br>
+<br>
+ unsigned getOffloadingHostActiveKinds() const {<br>
+ return ActiveOffloadKindMask;<br>
+ }<br>
+ OffloadKind getOffloadingDeviceKind() const { return OffloadingDeviceKind; }<br>
+ const char *getOffloadingArch() const { return OffloadingArch; }<br>
+<br>
+ /// Check if this action have any offload kinds. Note that host offload kinds<br>
+ /// are only set if the action is a dependence to a host offload action.<br>
+ bool isHostOffloading(OffloadKind OKind) const {<br>
+ return ActiveOffloadKindMask & OKind;<br>
+ }<br>
+ bool isDeviceOffloading(OffloadKind OKind) const {<br>
+ return OffloadingDeviceKind == OKind;<br>
+ }<br>
+ bool isOffloading(OffloadKind OKind) const {<br>
+ return isHostOffloading(OKind) || isDeviceOffloading(OKind);<br>
+ }<br>
};<br>
<br>
class InputAction : public Action {<br>
@@ -156,39 +201,126 @@ public:<br>
}<br>
};<br>
<br>
-class CudaDeviceAction : public Action {<br>
+/// An offload action combines host or/and device actions according to the<br>
+/// programming model implementation needs and propagates the offloading kind to<br>
+/// its dependences.<br>
+class OffloadAction final : public Action {<br>
virtual void anchor();<br>
<br>
- const CudaArch GpuArch;<br>
-<br>
- /// True when action results are not consumed by the host action (e.g when<br>
- /// -fsyntax-only or --cuda-device-only options are used).<br>
- bool AtTopLevel;<br>
-<br>
public:<br>
- CudaDeviceAction(Action *Input, CudaArch Arch, bool AtTopLevel);<br>
+ /// Type used to communicate device actions. It associates bound architecture,<br>
+ /// toolchain, and offload kind to each action.<br>
+ class DeviceDependences final {<br>
+ public:<br>
+ typedef SmallVector<const ToolChain *, 3> ToolChainList;<br>
+ typedef SmallVector<const char *, 3> BoundArchList;<br>
+ typedef SmallVector<OffloadKind, 3> OffloadKindList;<br>
+<br>
+ private:<br>
+ // Lists that keep the information for each dependency. All the lists are<br>
+ // meant to be updated in sync. We are adopting separate lists instead of a<br>
+ // list of structs, because that simplifies forwarding the actions list to<br>
+ // initialize the inputs of the base Action class.<br>
+<br>
+ /// The dependence actions.<br>
+ ActionList DeviceActions;<br>
+ /// The offloading toolchains that should be used with the action.<br>
+ ToolChainList DeviceToolChains;<br>
+ /// The architectures that should be used with this action.<br>
+ BoundArchList DeviceBoundArchs;<br>
+ /// The offload kind of each dependence.<br>
+ OffloadKindList DeviceOffloadKinds;<br>
+<br>
+ public:<br>
+ /// Add a action along with the associated toolchain, bound arch, and<br>
+ /// offload kind.<br>
+ void add(Action &A, const ToolChain &TC, const char *BoundArch,<br>
+ OffloadKind OKind);<br>
+<br>
+ /// Get each of the individual arrays.<br>
+ const ActionList &getActions() const { return DeviceActions; };<br>
+ const ToolChainList &getToolChains() const { return DeviceToolChains; };<br>
+ const BoundArchList &getBoundArchs() const { return DeviceBoundArchs; };<br>
+ const OffloadKindList &getOffloadKinds() const {<br>
+ return DeviceOffloadKinds;<br>
+ };<br>
+ };<br>
<br>
- /// Get the CUDA GPU architecture to which this Action corresponds. Returns<br>
- /// UNKNOWN if this Action corresponds to multiple architectures.<br>
- CudaArch getGpuArch() const { return GpuArch; }<br>
+ /// Type used to communicate host actions. It associates bound architecture,<br>
+ /// toolchain, and offload kinds to the host action.<br>
+ class HostDependence final {<br>
+ /// The dependence action.<br>
+ Action &HostAction;<br>
+ /// The offloading toolchain that should be used with the action.<br>
+ const ToolChain &HostToolChain;<br>
+ /// The architectures that should be used with this action.<br>
+ const char *HostBoundArch = nullptr;<br>
+ /// The offload kind of each dependence.<br>
+ unsigned HostOffloadKinds = 0u;<br>
+<br>
+ public:<br>
+ HostDependence(Action &A, const ToolChain &TC, const char *BoundArch,<br>
+ const unsigned OffloadKinds)<br>
+ : HostAction(A), HostToolChain(TC), HostBoundArch(BoundArch),<br>
+ HostOffloadKinds(OffloadKinds){};<br>
+ /// Constructor version that obtains the offload kinds from the device<br>
+ /// dependencies.<br>
+ HostDependence(Action &A, const ToolChain &TC, const char *BoundArch,<br>
+ const DeviceDependences &DDeps);<br>
+ Action *getAction() const { return &HostAction; };<br>
+ const ToolChain *getToolChain() const { return &HostToolChain; };<br>
+ const char *getBoundArch() const { return HostBoundArch; };<br>
+ unsigned getOffloadKinds() const { return HostOffloadKinds; };<br>
+ };<br>
<br>
- bool isAtTopLevel() const { return AtTopLevel; }<br>
+ typedef llvm::function_ref<void(Action *, const ToolChain *, const char *)><br>
+ OffloadActionWorkTy;<br>
<br>
- static bool classof(const Action *A) {<br>
- return A->getKind() == CudaDeviceClass;<br>
- }<br>
-};<br>
+private:<br>
+ /// The host offloading toolchain that should be used with the action.<br>
+ const ToolChain *HostTC = nullptr;<br>
<br>
-class CudaHostAction : public Action {<br>
- virtual void anchor();<br>
- ActionList DeviceActions;<br>
+ /// The tool chains associated with the list of actions.<br>
+ DeviceDependences::ToolChainList DevToolChains;<br>
<br>
public:<br>
- CudaHostAction(Action *Input, const ActionList &DeviceActions);<br>
-<br>
- const ActionList &getDeviceActions() const { return DeviceActions; }<br>
+ OffloadAction(const HostDependence &HDep);<br>
+ OffloadAction(const DeviceDependences &DDeps, types::ID Ty);<br>
+ OffloadAction(const HostDependence &HDep, const DeviceDependences &DDeps);<br>
+<br>
+ /// Execute the work specified in \a Work on the host dependence.<br>
+ void doOnHostDependence(const OffloadActionWorkTy &Work) const;<br>
+<br>
+ /// Execute the work specified in \a Work on each device dependence.<br>
+ void doOnEachDeviceDependence(const OffloadActionWorkTy &Work) const;<br>
+<br>
+ /// Execute the work specified in \a Work on each dependence.<br>
+ void doOnEachDependence(const OffloadActionWorkTy &Work) const;<br>
+<br>
+ /// Execute the work specified in \a Work on each host or device dependence if<br>
+ /// \a IsHostDependenceto is true or false, respectively.<br>
+ void doOnEachDependence(bool IsHostDependence,<br>
+ const OffloadActionWorkTy &Work) const;<br>
+<br>
+ /// Return true if the action has a host dependence.<br>
+ bool hasHostDependence() const;<br>
+<br>
+ /// Return the host dependence of this action. This function is only expected<br>
+ /// to be called if the host dependence exists.<br>
+ Action *getHostDependence() const;<br>
+<br>
+ /// Return true if the action has a single device dependence. If \a<br>
+ /// DoNotConsiderHostActions is set, ignore the host dependence, if any, while<br>
+ /// accounting for the number of dependences.<br>
+ bool hasSingleDeviceDependence(bool DoNotConsiderHostActions = false) const;<br>
+<br>
+ /// Return the single device dependence of this action. This function is only<br>
+ /// expected to be called if a single device dependence exists. If \a<br>
+ /// DoNotConsiderHostActions is set, a host dependence is allowed.<br>
+ Action *<br>
+ getSingleDeviceDependence(bool DoNotConsiderHostActions = false) const;<br>
<br>
- static bool classof(const Action *A) { return A->getKind() == CudaHostClass; }<br>
+ static bool classof(const Action *A) { return A->getKind() == OffloadClass; }<br>
};<br>
<br>
class JobAction : public Action {<br>
<br>
Modified: cfe/trunk/include/clang/Driver/Compilation.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Compilation.h?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Compilation.h?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/include/clang/Driver/Compilation.h (original)<br>
+++ cfe/trunk/include/clang/Driver/Compilation.h Fri Jul 15 18:13:27 2016<br>
@@ -98,12 +98,7 @@ public:<br>
const Driver &getDriver() const { return TheDriver; }<br>
<br>
const ToolChain &getDefaultToolChain() const { return DefaultToolChain; }<br>
- const ToolChain *getOffloadingHostToolChain() const {<br>
- auto It = OrderedOffloadingToolchains.find(Action::OFK_Host);<br>
- if (It != OrderedOffloadingToolchains.end())<br>
- return It->second;<br>
- return nullptr;<br>
- }<br>
+<br>
unsigned isOffloadingHostKind(Action::OffloadKind Kind) const {<br>
return ActiveOffloadMask & Kind;<br>
}<br>
@@ -121,8 +116,8 @@ public:<br>
return OrderedOffloadingToolchains.equal_range(Kind);<br>
}<br>
<br>
- // Return an offload toolchain of the provided kind. Only one is expected to<br>
- // exist.<br>
+ /// Return an offload toolchain of the provided kind. Only one is expected to<br>
+ /// exist.<br>
template <Action::OffloadKind Kind><br>
const ToolChain *getSingleOffloadToolChain() const {<br>
auto TCs = getOffloadToolChains<Kind>();<br>
<br>
Modified: cfe/trunk/include/clang/Driver/Driver.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Driver.h?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/include/clang/Driver/Driver.h?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/include/clang/Driver/Driver.h (original)<br>
+++ cfe/trunk/include/clang/Driver/Driver.h Fri Jul 15 18:13:27 2016<br>
@@ -394,12 +394,13 @@ public:<br>
/// BuildJobsForAction - Construct the jobs to perform for the action \p A and<br>
/// return an InputInfo for the result of running \p A. Will only construct<br>
/// jobs for a given (Action, ToolChain, BoundArch) tuple once.<br>
- InputInfo BuildJobsForAction(Compilation &C, const Action *A,<br>
- const ToolChain *TC, const char *BoundArch,<br>
- bool AtTopLevel, bool MultipleArchs,<br>
- const char *LinkingOutput,<br>
- std::map<std::pair<const Action *, std::string>,<br>
- InputInfo> &CachedResults) const;<br>
+ InputInfo<br>
+ BuildJobsForAction(Compilation &C, const Action *A, const ToolChain *TC,<br>
+ const char *BoundArch, bool AtTopLevel, bool MultipleArchs,<br>
+ const char *LinkingOutput,<br>
+ std::map<std::pair<const Action *, std::string>, InputInfo><br>
+ &CachedResults,<br>
+ bool BuildForOffloadDevice) const;<br>
<br>
/// Returns the default name for linked images (e.g., "a.out").<br>
const char *getDefaultImageName() const;<br>
@@ -415,12 +416,11 @@ public:<br>
/// \param BoundArch - The bound architecture.<br>
/// \param AtTopLevel - Whether this is a "top-level" action.<br>
/// \param MultipleArchs - Whether multiple -arch options were supplied.<br>
- const char *GetNamedOutputPath(Compilation &C,<br>
- const JobAction &JA,<br>
- const char *BaseInput,<br>
- const char *BoundArch,<br>
- bool AtTopLevel,<br>
- bool MultipleArchs) const;<br>
+ /// \param NormalizedTriple - The normalized triple of the relevant target.<br>
+ const char *GetNamedOutputPath(Compilation &C, const JobAction &JA,<br>
+ const char *BaseInput, const char *BoundArch,<br>
+ bool AtTopLevel, bool MultipleArchs,<br>
+ StringRef NormalizedTriple) const;<br>
<br>
/// GetTemporaryPath - Return the pathname of a temporary file to use<br>
/// as part of compilation; the file will have the given prefix and suffix.<br>
@@ -467,7 +467,8 @@ private:<br>
const char *BoundArch, bool AtTopLevel, bool MultipleArchs,<br>
const char *LinkingOutput,<br>
std::map<std::pair<const Action *, std::string>, InputInfo><br>
- &CachedResults) const;<br>
+ &CachedResults,<br>
+ bool BuildForOffloadDevice) const;<br>
<br>
public:<br>
/// GetReleaseVersion - Parse (([0-9]+)(.([0-9]+)(.([0-9]+)?))?)? and<br>
<br>
Modified: cfe/trunk/lib/Driver/Action.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Action.cpp?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Action.cpp?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Driver/Action.cpp (original)<br>
+++ cfe/trunk/lib/Driver/Action.cpp Fri Jul 15 18:13:27 2016<br>
@@ -8,6 +8,7 @@<br>
//===----------------------------------------------------------------------===//<br>
<br>
#include "clang/Driver/Action.h"<br>
+#include "clang/Driver/ToolChain.h"<br>
#include "llvm/ADT/StringSwitch.h"<br>
#include "llvm/Support/ErrorHandling.h"<br>
#include "llvm/Support/Regex.h"<br>
@@ -21,8 +22,8 @@ const char *Action::getClassName(ActionC<br>
switch (AC) {<br>
case InputClass: return "input";<br>
case BindArchClass: return "bind-arch";<br>
- case CudaDeviceClass: return "cuda-device";<br>
- case CudaHostClass: return "cuda-host";<br>
+ case OffloadClass:<br>
+ return "offload";<br>
case PreprocessJobClass: return "preprocessor";<br>
case PrecompileJobClass: return "precompiler";<br>
case AnalyzeJobClass: return "analyzer";<br>
@@ -40,6 +41,82 @@ const char *Action::getClassName(ActionC<br>
llvm_unreachable("invalid class");<br>
}<br>
<br>
+void Action::propagateDeviceOffloadInfo(OffloadKind OKind, const char *OArch) {<br>
+ // Offload action set its own kinds on their dependences.<br>
+ if (Kind == OffloadClass)<br>
+ return;<br>
+<br>
+ assert((OffloadingDeviceKind == OKind || OffloadingDeviceKind == OFK_None) &&<br>
+ "Setting device kind to a different device??");<br>
+ assert(!ActiveOffloadKindMask && "Setting a device kind in a host action??");<br>
+ OffloadingDeviceKind = OKind;<br>
+ OffloadingArch = OArch;<br>
+<br>
+ for (auto *A : Inputs)<br>
+ A->propagateDeviceOffloadInfo(OffloadingDeviceKind, OArch);<br>
+}<br>
+<br>
+void Action::propagateHostOffloadInfo(unsigned OKinds, const char *OArch) {<br>
+ // Offload action set its own kinds on their dependences.<br>
+ if (Kind == OffloadClass)<br>
+ return;<br>
+<br>
+ assert(OffloadingDeviceKind == OFK_None &&<br>
+ "Setting a host kind in a device action.");<br>
+ ActiveOffloadKindMask |= OKinds;<br>
+ OffloadingArch = OArch;<br>
+<br>
+ for (auto *A : Inputs)<br>
+ A->propagateHostOffloadInfo(ActiveOffloadKindMask, OArch);<br>
+}<br>
+<br>
+void Action::propagateOffloadInfo(const Action *A) {<br>
+ if (unsigned HK = A->getOffloadingHostActiveKinds())<br>
+ propagateHostOffloadInfo(HK, A->getOffloadingArch());<br>
+ else<br>
+ propagateDeviceOffloadInfo(A->getOffloadingDeviceKind(),<br>
+ A->getOffloadingArch());<br>
+}<br>
+<br>
+std::string Action::getOffloadingKindPrefix() const {<br>
+ switch (OffloadingDeviceKind) {<br>
+ case OFK_None:<br>
+ break;<br>
+ case OFK_Host:<br>
+ llvm_unreachable("Host kind is not an offloading device kind.");<br>
+ break;<br>
+ case OFK_Cuda:<br>
+ return "device-cuda";<br>
+<br>
+ // TODO: Add other programming models here.<br>
+ }<br>
+<br>
+ if (!ActiveOffloadKindMask)<br>
+ return "";<br>
+<br>
+ std::string Res("host");<br>
+ if (ActiveOffloadKindMask & OFK_Cuda)<br>
+ Res += "-cuda";<br>
+<br>
+ // TODO: Add other programming models here.<br>
+<br>
+ return Res;<br>
+}<br>
+<br>
+std::string<br>
+Action::getOffloadingFileNamePrefix(StringRef NormalizedTriple) const {<br>
+ // A file prefix is only generated for device actions and consists of the<br>
+ // offload kind and triple.<br>
+ if (!OffloadingDeviceKind)<br>
+ return "";<br>
+<br>
+ std::string Res("-");<br>
+ Res += getOffloadingKindPrefix();<br>
+ Res += "-";<br>
+ Res += NormalizedTriple;<br>
+ return Res;<br>
+}<br>
+<br>
void InputAction::anchor() {}<br>
<br>
InputAction::InputAction(const Arg &_Input, types::ID _Type)<br>
@@ -51,16 +128,138 @@ void BindArchAction::anchor() {}<br>
BindArchAction::BindArchAction(Action *Input, const char *_ArchName)<br>
: Action(BindArchClass, Input), ArchName(_ArchName) {}<br>
<br>
-void CudaDeviceAction::anchor() {}<br>
+void OffloadAction::anchor() {}<br>
+<br>
+OffloadAction::OffloadAction(const HostDependence &HDep)<br>
+ : Action(OffloadClass, HDep.getAction()), HostTC(HDep.getToolChain()) {<br>
+ OffloadingArch = HDep.getBoundArch();<br>
+ ActiveOffloadKindMask = HDep.getOffloadKinds();<br>
+ HDep.getAction()->propagateHostOffloadInfo(HDep.getOffloadKinds(),<br>
+ HDep.getBoundArch());<br>
+};<br>
+<br>
+OffloadAction::OffloadAction(const DeviceDependences &DDeps, types::ID Ty)<br>
+ : Action(OffloadClass, DDeps.getActions(), Ty),<br>
+ DevToolChains(DDeps.getToolChains()) {<br>
+ auto &OKinds = DDeps.getOffloadKinds();<br>
+ auto &BArchs = DDeps.getBoundArchs();<br>
+<br>
+ // If all inputs agree on the same kind, use it also for this action.<br>
+ if (llvm::all_of(OKinds, [&](OffloadKind K) { return K == OKinds.front(); }))<br>
+ OffloadingDeviceKind = OKinds.front();<br>
+<br>
+ // If we have a single dependency, inherit the architecture from it.<br>
+ if (OKinds.size() == 1)<br>
+ OffloadingArch = BArchs.front();<br>
+<br>
+ // Propagate info to the dependencies.<br>
+ for (unsigned i = 0, e = getInputs().size(); i != e; ++i)<br>
+ getInputs()[i]->propagateDeviceOffloadInfo(OKinds[i], BArchs[i]);<br>
+}<br>
+<br>
+OffloadAction::OffloadAction(const HostDependence &HDep,<br>
+ const DeviceDependences &DDeps)<br>
+ : Action(OffloadClass, HDep.getAction()), HostTC(HDep.getToolChain()),<br>
+ DevToolChains(DDeps.getToolChains()) {<br>
+ // We use the kinds of the host dependence for this action.<br>
+ OffloadingArch = HDep.getBoundArch();<br>
+ ActiveOffloadKindMask = HDep.getOffloadKinds();<br>
+ HDep.getAction()->propagateHostOffloadInfo(HDep.getOffloadKinds(),<br>
+ HDep.getBoundArch());<br>
+<br>
+ // Add device inputs and propagate info to the device actions. Do work only if<br>
+ // we have dependencies.<br>
+ for (unsigned i = 0, e = DDeps.getActions().size(); i != e; ++i)<br>
+ if (auto *A = DDeps.getActions()[i]) {<br>
+ getInputs().push_back(A);<br>
+ A->propagateDeviceOffloadInfo(DDeps.getOffloadKinds()[i],<br>
+ DDeps.getBoundArchs()[i]);<br>
+ }<br>
+}<br>
+<br>
+void OffloadAction::doOnHostDependence(const OffloadActionWorkTy &Work) const {<br>
+ if (!HostTC)<br>
+ return;<br>
+ assert(!getInputs().empty() && "No dependencies for offload action??");<br>
+ auto *A = getInputs().front();<br>
+ Work(A, HostTC, A->getOffloadingArch());<br>
+}<br>
<br>
-CudaDeviceAction::CudaDeviceAction(Action *Input, clang::CudaArch Arch,<br>
- bool AtTopLevel)<br>
- : Action(CudaDeviceClass, Input), GpuArch(Arch), AtTopLevel(AtTopLevel) {}<br>
+void OffloadAction::doOnEachDeviceDependence(<br>
+ const OffloadActionWorkTy &Work) const {<br>
+ auto I = getInputs().begin();<br>
+ auto E = getInputs().end();<br>
+ if (I == E)<br>
+ return;<br>
+<br>
+ // We expect to have the same number of input dependences and device tool<br>
+ // chains, except if we also have a host dependence. In that case we have one<br>
+ // more dependence than we have device tool chains.<br>
+ assert(getInputs().size() == DevToolChains.size() + (HostTC ? 1 : 0) &&<br>
+ "Sizes of action dependences and toolchains are not consistent!");<br>
+<br>
+ // Skip host action<br>
+ if (HostTC)<br>
+ ++I;<br>
+<br>
+ auto TI = DevToolChains.begin();<br>
+ for (; I != E; ++I, ++TI)<br>
+ Work(*I, *TI, (*I)->getOffloadingArch());<br>
+}<br>
+<br>
+void OffloadAction::doOnEachDependence(const OffloadActionWorkTy &Work) const {<br>
+ doOnHostDependence(Work);<br>
+ doOnEachDeviceDependence(Work);<br>
+}<br>
+<br>
+void OffloadAction::doOnEachDependence(bool IsHostDependence,<br>
+ const OffloadActionWorkTy &Work) const {<br>
+ if (IsHostDependence)<br>
+ doOnHostDependence(Work);<br>
+ else<br>
+ doOnEachDeviceDependence(Work);<br>
+}<br>
<br>
-void CudaHostAction::anchor() {}<br>
+bool OffloadAction::hasHostDependence() const { return HostTC != nullptr; }<br>
<br>
-CudaHostAction::CudaHostAction(Action *Input, const ActionList &DeviceActions)<br>
- : Action(CudaHostClass, Input), DeviceActions(DeviceActions) {}<br>
+Action *OffloadAction::getHostDependence() const {<br>
+ assert(hasHostDependence() && "Host dependence does not exist!");<br>
+ assert(!getInputs().empty() && "No dependencies for offload action??");<br>
+ return HostTC ? getInputs().front() : nullptr;<br>
+}<br>
+<br>
+bool OffloadAction::hasSingleDeviceDependence(<br>
+ bool DoNotConsiderHostActions) const {<br>
+ if (DoNotConsiderHostActions)<br>
+ return getInputs().size() == (HostTC ? 2 : 1);<br>
+ return !HostTC && getInputs().size() == 1;<br>
+}<br>
+<br>
+Action *<br>
+OffloadAction::getSingleDeviceDependence(bool DoNotConsiderHostActions) const {<br>
+ assert(hasSingleDeviceDependence(DoNotConsiderHostActions) &&<br>
+ "Single device dependence does not exist!");<br>
+ // The previous assert ensures the number of entries in getInputs() is<br>
+ // consistent with what we are doing here.<br>
+ return HostTC ? getInputs()[1] : getInputs().front();<br>
+}<br>
+<br>
+void OffloadAction::DeviceDependences::add(Action &A, const ToolChain &TC,<br>
+ const char *BoundArch,<br>
+ OffloadKind OKind) {<br>
+ DeviceActions.push_back(&A);<br>
+ DeviceToolChains.push_back(&TC);<br>
+ DeviceBoundArchs.push_back(BoundArch);<br>
+ DeviceOffloadKinds.push_back(OKind);<br>
+}<br>
+<br>
+OffloadAction::HostDependence::HostDependence(Action &A, const ToolChain &TC,<br>
+ const char *BoundArch,<br>
+ const DeviceDependences &DDeps)<br>
+ : HostAction(A), HostToolChain(TC), HostBoundArch(BoundArch) {<br>
+ for (auto K : DDeps.getOffloadKinds())<br>
+ HostOffloadKinds |= K;<br>
+}<br>
<br>
void JobAction::anchor() {}<br>
<br>
<br>
Modified: cfe/trunk/lib/Driver/Driver.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Driver.cpp?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Driver.cpp?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Driver/Driver.cpp (original)<br>
+++ cfe/trunk/lib/Driver/Driver.cpp Fri Jul 15 18:13:27 2016<br>
@@ -435,7 +435,9 @@ void Driver::CreateOffloadingDeviceToolC<br>
})) {<br>
const ToolChain &TC = getToolChain(<br>
C.getInputArgs(),<br>
- llvm::Triple(C.getOffloadingHostToolChain()->getTriple().isArch64Bit()<br>
+ llvm::Triple(C.getSingleOffloadToolChain<Action::OFK_Host>()<br>
+ ->getTriple()<br>
+ .isArch64Bit()<br>
? "nvptx64-nvidia-cuda"<br>
: "nvptx-nvidia-cuda"));<br>
C.addOffloadDeviceToolChain(&TC, Action::OFK_Cuda);<br>
@@ -1022,19 +1024,33 @@ static unsigned PrintActions1(const Comp<br>
} else if (BindArchAction *BIA = dyn_cast<BindArchAction>(A)) {<br>
os << '"' << BIA->getArchName() << '"' << ", {"<br>
<< PrintActions1(C, *BIA->input_begin(), Ids) << "}";<br>
- } else if (CudaDeviceAction *CDA = dyn_cast<CudaDeviceAction>(A)) {<br>
- CudaArch Arch = CDA->getGpuArch();<br>
- if (Arch != CudaArch::UNKNOWN)<br>
- os << "'" << CudaArchToString(Arch) << "', ";<br>
- os << "{" << PrintActions1(C, *CDA->input_begin(), Ids) << "}";<br>
+ } else if (OffloadAction *OA = dyn_cast<OffloadAction>(A)) {<br>
+ bool IsFirst = true;<br>
+ OA->doOnEachDependence(<br>
+ [&](Action *A, const ToolChain *TC, const char *BoundArch) {<br>
+ // E.g. for two CUDA device dependences whose bound archs are sm_20 and<br>
+ // sm_35, this will generate:<br>
+ // "cuda-device" (nvptx64-nvidia-cuda:sm_20) {#ID}, "cuda-device"<br>
+ // (nvptx64-nvidia-cuda:sm_35) {#ID}<br>
+ if (!IsFirst)<br>
+ os << ", ";<br>
+ os << '"';<br>
+ if (TC)<br>
+ os << A->getOffloadingKindPrefix();<br>
+ else<br>
+ os << "host";<br>
+ os << " (";<br>
+ os << TC->getTriple().normalize();<br>
+<br>
+ if (BoundArch)<br>
+ os << ":" << BoundArch;<br>
+ os << ")";<br>
+ os << '"';<br>
+ os << " {" << PrintActions1(C, A, Ids) << "}";<br>
+ IsFirst = false;<br>
+ });<br>
} else {<br>
- const ActionList *AL;<br>
- if (CudaHostAction *CHA = dyn_cast<CudaHostAction>(A)) {<br>
- os << "{" << PrintActions1(C, *CHA->input_begin(), Ids) << "}"<br>
- << ", gpu binaries ";<br>
- AL = &CHA->getDeviceActions();<br>
- } else<br>
- AL = &A->getInputs();<br>
+ const ActionList *AL = &A->getInputs();<br>
<br>
if (AL->size()) {<br>
const char *Prefix = "{";<br>
@@ -1047,10 +1063,24 @@ static unsigned PrintActions1(const Comp<br>
os << "{}";<br>
}<br>
<br>
+ // Append offload info for all actions other than the offloading action<br>
+ // itself (e.g. (cuda-device, sm_20) or (cuda-host)).<br>
+ std::string offload_str;<br>
+ llvm::raw_string_ostream offload_os(offload_str);<br>
+ if (!isa<OffloadAction>(A)) {<br>
+ auto S = A->getOffloadingKindPrefix();<br>
+ if (!S.empty()) {<br>
+ offload_os << ", (" << S;<br>
+ if (A->getOffloadingArch())<br>
+ offload_os << ", " << A->getOffloadingArch();<br>
+ offload_os << ")";<br>
+ }<br>
+ }<br>
+<br>
unsigned Id = Ids.size();<br>
Ids[A] = Id;<br>
llvm::errs() << Id << ": " << os.str() << ", "<br>
- << types::getTypeName(A->getType()) << "\n";<br>
+ << types::getTypeName(A->getType()) << offload_os.str() << "\n";<br>
<br>
return Id;<br>
}<br>
@@ -1378,8 +1408,12 @@ static Action *buildCudaActions(Compilat<br>
PartialCompilationArg &&<br>
PartialCompilationArg->getOption().matches(options::OPT_cuda_device_only);<br>
<br>
- if (CompileHostOnly)<br>
- return C.MakeAction<CudaHostAction>(HostAction, ActionList());<br>
+ if (CompileHostOnly) {<br>
+ OffloadAction::HostDependence HDep(<br>
+ *HostAction, *C.getSingleOffloadToolChain<Action::OFK_Host>(),<br>
+ /*BoundArch=*/nullptr, Action::OFK_Cuda);<br>
+ return C.MakeAction<OffloadAction>(HDep);<br>
+ }<br>
<br>
// Collect all cuda_gpu_arch parameters, removing duplicates.<br>
SmallVector<CudaArch, 4> GpuArchList;<br>
@@ -1408,8 +1442,6 @@ static Action *buildCudaActions(Compilat<br>
CudaDeviceInputs.push_back(std::make_pair(types::TY_CUDA_DEVICE, InputArg));<br>
<br>
// Build actions for all device inputs.<br>
- assert(C.getSingleOffloadToolChain<Action::OFK_Cuda>() &&<br>
- "Missing toolchain for device-side compilation.");<br>
ActionList CudaDeviceActions;<br>
C.getDriver().BuildActions(C, Args, CudaDeviceInputs, CudaDeviceActions);<br>
assert(GpuArchList.size() == CudaDeviceActions.size() &&<br>
@@ -1421,6 +1453,8 @@ static Action *buildCudaActions(Compilat<br>
return a->getKind() != Action::AssembleJobClass;<br>
});<br>
<br>
+ const ToolChain *CudaTC = C.getSingleOffloadToolChain<Action::OFK_Cuda>();<br>
+<br>
// Figure out what to do with device actions -- pass them as inputs to the<br>
// host action or run each of them independently.<br>
if (PartialCompilation || CompileDeviceOnly) {<br>
@@ -1436,10 +1470,13 @@ static Action *buildCudaActions(Compilat<br>
return nullptr;<br>
}<br>
<br>
- for (unsigned I = 0, E = GpuArchList.size(); I != E; ++I)<br>
- Actions.push_back(C.MakeAction<CudaDeviceAction>(CudaDeviceActions[I],<br>
- GpuArchList[I],<br>
- /* AtTopLevel */ true));<br>
+ for (unsigned I = 0, E = GpuArchList.size(); I != E; ++I) {<br>
+ OffloadAction::DeviceDependences DDep;<br>
+ DDep.add(*CudaDeviceActions[I], *CudaTC, CudaArchToString(GpuArchList[I]),<br>
+ Action::OFK_Cuda);<br>
+ Actions.push_back(<br>
+ C.MakeAction<OffloadAction>(DDep, CudaDeviceActions[I]->getType()));<br>
+ }<br>
// Kill host action in case of device-only compilation.<br>
if (CompileDeviceOnly)<br>
return nullptr;<br>
@@ -1459,19 +1496,23 @@ static Action *buildCudaActions(Compilat<br>
Action* BackendAction = AssembleAction->getInputs()[0];<br>
assert(BackendAction->getType() == types::TY_PP_Asm);<br>
<br>
- for (const auto& A : {AssembleAction, BackendAction}) {<br>
- DeviceActions.push_back(C.MakeAction<CudaDeviceAction>(<br>
- A, GpuArchList[I], /* AtTopLevel */ false));<br>
+ for (auto &A : {AssembleAction, BackendAction}) {<br>
+ OffloadAction::DeviceDependences DDep;<br>
+ DDep.add(*A, *CudaTC, CudaArchToString(GpuArchList[I]), Action::OFK_Cuda);<br>
+ DeviceActions.push_back(C.MakeAction<OffloadAction>(DDep, A->getType()));<br>
}<br>
}<br>
- auto FatbinAction = C.MakeAction<CudaDeviceAction>(<br>
- C.MakeAction<LinkJobAction>(DeviceActions, types::TY_CUDA_FATBIN),<br>
- CudaArch::UNKNOWN,<br>
- /* AtTopLevel = */ false);<br>
+ auto FatbinAction =<br>
+ C.MakeAction<LinkJobAction>(DeviceActions, types::TY_CUDA_FATBIN);<br>
+<br>
// Return a new host action that incorporates original host action and all<br>
// device actions.<br>
- return C.MakeAction<CudaHostAction>(std::move(HostAction),<br>
- ActionList({FatbinAction}));<br>
+ OffloadAction::HostDependence HDep(<br>
+ *HostAction, *C.getSingleOffloadToolChain<Action::OFK_Host>(),<br>
+ /*BoundArch=*/nullptr, Action::OFK_Cuda);<br>
+ OffloadAction::DeviceDependences DDep;<br>
+ DDep.add(*FatbinAction, *CudaTC, /*BoundArch=*/nullptr, Action::OFK_Cuda);<br>
+ return C.MakeAction<OffloadAction>(HDep, DDep);<br>
}<br>
<br>
void Driver::BuildActions(Compilation &C, DerivedArgList &Args,<br>
@@ -1580,6 +1621,9 @@ void Driver::BuildActions(Compilation &C<br>
YcArg = YuArg = nullptr;<br>
}<br>
<br>
+ // Track the host offload kinds used on this compilation.<br>
+ unsigned CompilationActiveOffloadHostKinds = 0u;<br>
+<br>
// Construct the actions to perform.<br>
ActionList LinkerInputs;<br>
<br>
@@ -1648,6 +1692,9 @@ void Driver::BuildActions(Compilation &C<br>
? phases::Compile<br>
: FinalPhase;<br>
<br>
+ // Track the host offload kinds used on this input.<br>
+ unsigned InputActiveOffloadHostKinds = 0u;<br>
+<br>
// Build the pipeline for this file.<br>
Action *Current = C.MakeAction<InputAction>(*InputArg, InputType);<br>
for (SmallVectorImpl<phases::ID>::iterator i = PL.begin(), e = PL.end();<br>
@@ -1679,21 +1726,36 @@ void Driver::BuildActions(Compilation &C<br>
Current = buildCudaActions(C, Args, InputArg, Current, Actions);<br>
if (!Current)<br>
break;<br>
+<br>
+ // We produced a CUDA action for this input, so the host has to support<br>
+ // CUDA.<br>
+ InputActiveOffloadHostKinds |= Action::OFK_Cuda;<br>
+ CompilationActiveOffloadHostKinds |= Action::OFK_Cuda;<br>
}<br>
<br>
if (Current->getType() == types::TY_Nothing)<br>
break;<br>
}<br>
<br>
- // If we ended with something, add to the output list.<br>
- if (Current)<br>
+ // If we ended with something, add to the output list. Also, propagate the<br>
+ // offload information to the top-level host action related with the current<br>
+ // input.<br>
+ if (Current) {<br>
+ if (InputActiveOffloadHostKinds)<br>
+ Current->propagateHostOffloadInfo(InputActiveOffloadHostKinds,<br>
+ /*BoundArch=*/nullptr);<br>
Actions.push_back(Current);<br>
+ }<br>
}<br>
<br>
- // Add a link action if necessary.<br>
- if (!LinkerInputs.empty())<br>
+ // Add a link action if necessary and propagate the offload information for<br>
+ // the current compilation.<br>
+ if (!LinkerInputs.empty()) {<br>
Actions.push_back(<br>
C.MakeAction<LinkJobAction>(LinkerInputs, types::TY_Image));<br>
+ Actions.back()->propagateHostOffloadInfo(CompilationActiveOffloadHostKinds,<br>
+ /*BoundArch=*/nullptr);<br>
+ }<br>
<br>
// If we are linking, claim any options which are obviously only used for<br>
// compilation.<br>
@@ -1829,7 +1891,8 @@ void Driver::BuildJobs(Compilation &C) c<br>
/*BoundArch*/ nullptr,<br>
/*AtTopLevel*/ true,<br>
/*MultipleArchs*/ ArchNames.size() > 1,<br>
- /*LinkingOutput*/ LinkingOutput, CachedResults);<br>
+ /*LinkingOutput*/ LinkingOutput, CachedResults,<br>
+ /*BuildForOffloadDevice*/ false);<br>
}<br>
<br>
// If the user passed -Qunused-arguments or there were errors, don't warn<br>
@@ -1878,7 +1941,28 @@ void Driver::BuildJobs(Compilation &C) c<br>
}<br>
}<br>
}<br>
-<br>
+/// Collapse an offloading action looking for a job of the given type. The input<br>
+/// action is changed to the input of the collapsed sequence. If we effectively<br>
+/// had a collapse, return the corresponding offloading action; otherwise return<br>
+/// null.<br>
+template <typename T><br>
+static OffloadAction *collapseOffloadingAction(Action *&CurAction) {<br>
+ if (!CurAction)<br>
+ return nullptr;<br>
+ if (auto *OA = dyn_cast<OffloadAction>(CurAction)) {<br>
+ if (OA->hasHostDependence())<br>
+ if (auto *HDep = dyn_cast<T>(OA->getHostDependence())) {<br>
+ CurAction = HDep;<br>
+ return OA;<br>
+ }<br>
+ if (OA->hasSingleDeviceDependence())<br>
+ if (auto *DDep = dyn_cast<T>(OA->getSingleDeviceDependence())) {<br>
+ CurAction = DDep;<br>
+ return OA;<br>
+ }<br>
+ }<br>
+ return nullptr;<br>
+}<br>
// Returns a Tool for a given JobAction. In case the action and its<br>
// predecessors can be combined, updates Inputs with the inputs of the<br>
// first combined action. If one of the collapsed actions is a<br>
@@ -1888,34 +1972,39 @@ static const Tool *selectToolForJob(Comp<br>
bool EmbedBitcode, const ToolChain *TC,<br>
const JobAction *JA,<br>
const ActionList *&Inputs,<br>
- const CudaHostAction *&CollapsedCHA) {<br>
+ ActionList &CollapsedOffloadAction) {<br>
const Tool *ToolForJob = nullptr;<br>
- CollapsedCHA = nullptr;<br>
+ CollapsedOffloadAction.clear();<br>
<br>
// See if we should look for a compiler with an integrated assembler. We match<br>
// bottom up, so what we are actually looking for is an assembler job with a<br>
// compiler input.<br>
<br>
+ // Look through offload actions between assembler and backend actions.<br>
+ Action *BackendJA = (isa<AssembleJobAction>(JA) && Inputs->size() == 1)<br>
+ ? *Inputs->begin()<br>
+ : nullptr;<br>
+ auto *BackendOA = collapseOffloadingAction<BackendJobAction>(BackendJA);<br>
+<br>
if (TC->useIntegratedAs() && !SaveTemps &&<br>
!C.getArgs().hasArg(options::OPT_via_file_asm) &&<br>
!C.getArgs().hasArg(options::OPT__SLASH_FA) &&<br>
- !C.getArgs().hasArg(options::OPT__SLASH_Fa) &&<br>
- isa<AssembleJobAction>(JA) && Inputs->size() == 1 &&<br>
- isa<BackendJobAction>(*Inputs->begin())) {<br>
+ !C.getArgs().hasArg(options::OPT__SLASH_Fa) && BackendJA &&<br>
+ isa<BackendJobAction>(BackendJA)) {<br>
// A BackendJob is always preceded by a CompileJob, and without -save-temps<br>
// or -fembed-bitcode, they will always get combined together, so instead of<br>
// checking the backend tool, check if the tool for the CompileJob has an<br>
// integrated assembler. For -fembed-bitcode, CompileJob is still used to<br>
// look up tools for BackendJob, but they need to match before we can split<br>
// them.<br>
- const ActionList *BackendInputs = &(*Inputs)[0]->getInputs();<br>
- // Compile job may be wrapped in CudaHostAction, extract it if<br>
- // that's the case and update CollapsedCHA if we combine phases.<br>
- CudaHostAction *CHA = dyn_cast<CudaHostAction>(*BackendInputs->begin());<br>
- JobAction *CompileJA = cast<CompileJobAction>(<br>
- CHA ? *CHA->input_begin() : *BackendInputs->begin());<br>
- assert(CompileJA && "Backend job is not preceeded by compile job.");<br>
- const Tool *Compiler = TC->SelectTool(*CompileJA);<br>
+<br>
+ // Look through offload actions between backend and compile actions.<br>
+ Action *CompileJA = *BackendJA->getInputs().begin();<br>
+ auto *CompileOA = collapseOffloadingAction<CompileJobAction>(CompileJA);<br>
+<br>
+ assert(CompileJA && isa<CompileJobAction>(CompileJA) &&<br>
+ "Backend job is not preceded by compile job.");<br>
+ const Tool *Compiler = TC->SelectTool(*cast<CompileJobAction>(CompileJA));<br>
if (!Compiler)<br>
return nullptr;<br>
// When using -fembed-bitcode, it is required to have the same tool (clang)<br>
@@ -1929,7 +2018,12 @@ static const Tool *selectToolForJob(Comp<br>
if (Compiler->hasIntegratedAssembler()) {<br>
Inputs = &CompileJA->getInputs();<br>
ToolForJob = Compiler;<br>
- CollapsedCHA = CHA;<br>
+ // Save the collapsed offload actions because they may still contain<br>
+ // device actions.<br>
+ if (CompileOA)<br>
+ CollapsedOffloadAction.push_back(CompileOA);<br>
+ if (BackendOA)<br>
+ CollapsedOffloadAction.push_back(BackendOA);<br>
}<br>
}<br>
<br>
@@ -1939,20 +2033,23 @@ static const Tool *selectToolForJob(Comp<br>
if (isa<BackendJobAction>(JA)) {<br>
// Check if the compiler supports emitting LLVM IR.<br>
assert(Inputs->size() == 1);<br>
- // Compile job may be wrapped in CudaHostAction, extract it if<br>
- // that's the case and update CollapsedCHA if we combine phases.<br>
- CudaHostAction *CHA = dyn_cast<CudaHostAction>(*Inputs->begin());<br>
- JobAction *CompileJA =<br>
- cast<CompileJobAction>(CHA ? *CHA->input_begin() : *Inputs->begin());<br>
- assert(CompileJA && "Backend job is not preceeded by compile job.");<br>
- const Tool *Compiler = TC->SelectTool(*CompileJA);<br>
+<br>
+ // Look through offload actions between backend and compile actions.<br>
+ Action *CompileJA = *JA->getInputs().begin();<br>
+ auto *CompileOA = collapseOffloadingAction<CompileJobAction>(CompileJA);<br>
+<br>
+ assert(CompileJA && isa<CompileJobAction>(CompileJA) &&<br>
+ "Backend job is not preceded by compile job.");<br>
+ const Tool *Compiler = TC->SelectTool(*cast<CompileJobAction>(CompileJA));<br>
if (!Compiler)<br>
return nullptr;<br>
if (!Compiler->canEmitIR() ||<br>
(!SaveTemps && !EmbedBitcode)) {<br>
Inputs = &CompileJA->getInputs();<br>
ToolForJob = Compiler;<br>
- CollapsedCHA = CHA;<br>
+<br>
+ if (CompileOA)<br>
+ CollapsedOffloadAction.push_back(CompileOA);<br>
}<br>
}<br>
<br>
@@ -1963,12 +2060,21 @@ static const Tool *selectToolForJob(Comp<br>
// See if we should use an integrated preprocessor. We do so when we have<br>
// exactly one input, since this is the only use case we care about<br>
// (irrelevant since we don't support combine yet).<br>
- if (Inputs->size() == 1 && isa<PreprocessJobAction>(*Inputs->begin()) &&<br>
+<br>
+ // Look through offload actions after preprocessing.<br>
+ Action *PreprocessJA = (Inputs->size() == 1) ? *Inputs->begin() : nullptr;<br>
+ auto *PreprocessOA =<br>
+ collapseOffloadingAction<PreprocessJobAction>(PreprocessJA);<br>
+<br>
+ if (PreprocessJA && isa<PreprocessJobAction>(PreprocessJA) &&<br>
!C.getArgs().hasArg(options::OPT_no_integrated_cpp) &&<br>
!C.getArgs().hasArg(options::OPT_traditional_cpp) && !SaveTemps &&<br>
!C.getArgs().hasArg(options::OPT_rewrite_objc) &&<br>
- ToolForJob->hasIntegratedCPP())<br>
- Inputs = &(*Inputs)[0]->getInputs();<br>
+ ToolForJob->hasIntegratedCPP()) {<br>
+ Inputs = &PreprocessJA->getInputs();<br>
+ if (PreprocessOA)<br>
+ CollapsedOffloadAction.push_back(PreprocessOA);<br>
+ }<br>
<br>
return ToolForJob;<br>
}<br>
@@ -1976,8 +2082,8 @@ static const Tool *selectToolForJob(Comp<br>
InputInfo Driver::BuildJobsForAction(<br>
Compilation &C, const Action *A, const ToolChain *TC, const char *BoundArch,<br>
bool AtTopLevel, bool MultipleArchs, const char *LinkingOutput,<br>
- std::map<std::pair<const Action *, std::string>, InputInfo> &CachedResults)<br>
- const {<br>
+ std::map<std::pair<const Action *, std::string>, InputInfo> &CachedResults,<br>
+ bool BuildForOffloadDevice) const {<br>
// The bound arch is not necessarily represented in the toolchain's triple --<br>
// for example, armv7 and armv7s both map to the same triple -- so we need<br>
// both in our map.<br>
@@ -1991,9 +2097,9 @@ InputInfo Driver::BuildJobsForAction(<br>
if (CachedResult != CachedResults.end()) {<br>
return CachedResult->second;<br>
}<br>
- InputInfo Result =<br>
- BuildJobsForActionNoCache(C, A, TC, BoundArch, AtTopLevel, MultipleArchs,<br>
- LinkingOutput, CachedResults);<br>
+ InputInfo Result = BuildJobsForActionNoCache(<br>
+ C, A, TC, BoundArch, AtTopLevel, MultipleArchs, LinkingOutput,<br>
+ CachedResults, BuildForOffloadDevice);<br>
CachedResults[ActionTC] = Result;<br>
return Result;<br>
}<br>
@@ -2001,21 +2107,65 @@ InputInfo Driver::BuildJobsForAction(<br>
InputInfo Driver::BuildJobsForActionNoCache(<br>
Compilation &C, const Action *A, const ToolChain *TC, const char *BoundArch,<br>
bool AtTopLevel, bool MultipleArchs, const char *LinkingOutput,<br>
- std::map<std::pair<const Action *, std::string>, InputInfo> &CachedResults)<br>
- const {<br>
+ std::map<std::pair<const Action *, std::string>, InputInfo> &CachedResults,<br>
+ bool BuildForOffloadDevice) const {<br>
llvm::PrettyStackTraceString CrashInfo("Building compilation jobs");<br>
<br>
- InputInfoList CudaDeviceInputInfos;<br>
- if (const CudaHostAction *CHA = dyn_cast<CudaHostAction>(A)) {<br>
- // Append outputs of device jobs to the input list.<br>
- for (const Action *DA : CHA->getDeviceActions()) {<br>
- CudaDeviceInputInfos.push_back(BuildJobsForAction(<br>
- C, DA, TC, nullptr, AtTopLevel,<br>
- /*MultipleArchs*/ false, LinkingOutput, CachedResults));<br>
- }<br>
- // Override current action with a real host compile action and continue<br>
- // processing it.<br>
- A = *CHA->input_begin();<br>
+ InputInfoList OffloadDependencesInputInfo;<br>
+ if (const OffloadAction *OA = dyn_cast<OffloadAction>(A)) {<br>
+ // The offload action is expected to be used in four different situations.<br>
+ //<br>
+ // a) Set a toolchain/architecture/kind for a host action:<br>
+ // Host Action 1 -> OffloadAction -> Host Action 2<br>
+ //<br>
+ // b) Set a toolchain/architecture/kind for a device action:<br>
+ // Device Action 1 -> OffloadAction -> Device Action 2<br>
+ //<br>
+ // c) Specify device dependences for a host action:<br>
+ // Device Action 1 _<br>
+ // \<br>
+ // Host Action 1 ---> OffloadAction -> Host Action 2<br>
+ //<br>
+ // d) Specify a host dependence to a device action.<br>
+ // Host Action 1 _<br>
+ // \<br>
+ // Device Action 1 ---> OffloadAction -> Device Action 2<br>
+ //<br>
+ // For a) and b), we just return the job generated for the dependence. For<br>
+ // c) and d) we override the current action with the host/device dependence<br>
+ // if the current toolchain is host/device and set the offload dependences<br>
+ // info with the jobs obtained from the device/host dependence(s).<br>
+<br>
+ // If there is a single device option, just generate the job for it.<br>
+ if (OA->hasSingleDeviceDependence()) {<br>
+ InputInfo DevA;<br>
+ OA->doOnEachDeviceDependence([&](Action *DepA, const ToolChain *DepTC,<br>
+ const char *DepBoundArch) {<br>
+ DevA =<br>
+ BuildJobsForAction(C, DepA, DepTC, DepBoundArch, AtTopLevel,<br>
+ /*MultipleArchs*/ !!DepBoundArch, LinkingOutput,<br>
+ CachedResults, /*BuildForOffloadDevice=*/true);<br>
+ });<br>
+ return DevA;<br>
+ }<br>
+<br>
+ // If 'Action 2' is host, we generate jobs for the device dependences and<br>
+ // override the current action with the host dependence. Otherwise, we<br>
+ // generate the host dependences and override the action with the device<br>
+ // dependence. The dependences therefore can't be top-level actions.<br>
+ OA->doOnEachDependence(<br>
+ /*IsHostDependence=*/BuildForOffloadDevice,<br>
+ [&](Action *DepA, const ToolChain *DepTC, const char *DepBoundArch) {<br>
+ OffloadDependencesInputInfo.push_back(BuildJobsForAction(<br>
+ C, DepA, DepTC, DepBoundArch, /*AtTopLevel=*/false,<br>
+ /*MultipleArchs*/ !!DepBoundArch, LinkingOutput, CachedResults,<br>
+ /*BuildForOffloadDevice=*/DepA->getOffloadingDeviceKind() !=<br>
+ Action::OFK_None));<br>
+ });<br>
+<br>
+ A = BuildForOffloadDevice<br>
+ ? OA->getSingleDeviceDependence(/*DoNotConsiderHostActions=*/true)<br>
+ : OA->getHostDependence();<br>
}<br>
<br>
if (const InputAction *IA = dyn_cast<InputAction>(A)) {<br>
@@ -2042,41 +2192,34 @@ InputInfo Driver::BuildJobsForActionNoCa<br>
TC = &C.getDefaultToolChain();<br>
<br>
return BuildJobsForAction(C, *BAA->input_begin(), TC, ArchName, AtTopLevel,<br>
- MultipleArchs, LinkingOutput, CachedResults);<br>
+ MultipleArchs, LinkingOutput, CachedResults,<br>
+ BuildForOffloadDevice);<br>
}<br>
<br>
- if (const CudaDeviceAction *CDA = dyn_cast<CudaDeviceAction>(A)) {<br>
- // Initial processing of CudaDeviceAction carries host params.<br>
- // Call BuildJobsForAction() again, now with correct device parameters.<br>
- InputInfo II = BuildJobsForAction(<br>
- C, *CDA->input_begin(), C.getSingleOffloadToolChain<Action::OFK_Cuda>(),<br>
- CudaArchToString(CDA->getGpuArch()), CDA->isAtTopLevel(),<br>
- /*MultipleArchs=*/true, LinkingOutput, CachedResults);<br>
- // Currently II's Action is *CDA->input_begin(). Set it to CDA instead, so<br>
- // that one can retrieve II's GPU arch.<br>
- II.setAction(A);<br>
- return II;<br>
- }<br>
<br>
const ActionList *Inputs = &A->getInputs();<br>
<br>
const JobAction *JA = cast<JobAction>(A);<br>
- const CudaHostAction *CollapsedCHA = nullptr;<br>
+ ActionList CollapsedOffloadActions;<br>
+<br>
const Tool *T =<br>
selectToolForJob(C, isSaveTempsEnabled(), embedBitcodeEnabled(), TC, JA,<br>
- Inputs, CollapsedCHA);<br>
+ Inputs, CollapsedOffloadActions);<br>
if (!T)<br>
return InputInfo();<br>
<br>
- // If we've collapsed action list that contained CudaHostAction we<br>
- // need to build jobs for device-side inputs it may have held.<br>
- if (CollapsedCHA) {<br>
- for (const Action *DA : CollapsedCHA->getDeviceActions()) {<br>
- CudaDeviceInputInfos.push_back(BuildJobsForAction(<br>
- C, DA, TC, "", AtTopLevel,<br>
- /*MultipleArchs*/ false, LinkingOutput, CachedResults));<br>
- }<br>
- }<br>
+ // If we've collapsed action list that contained OffloadAction we<br>
+ // need to build jobs for host/device-side inputs it may have held.<br>
+ for (const auto *OA : CollapsedOffloadActions)<br>
+ cast<OffloadAction>(OA)->doOnEachDependence(<br>
+ /*IsHostDependence=*/BuildForOffloadDevice,<br>
+ [&](Action *DepA, const ToolChain *DepTC, const char *DepBoundArch) {<br>
+ OffloadDependencesInputInfo.push_back(BuildJobsForAction(<br>
+ C, DepA, DepTC, DepBoundArch, AtTopLevel,<br>
+ /*MultipleArchs=*/!!DepBoundArch, LinkingOutput, CachedResults,<br>
+ /*BuildForOffloadDevice=*/DepA->getOffloadingDeviceKind() !=<br>
+ Action::OFK_None));<br>
+ });<br>
<br>
// Only use pipes when there is exactly one input.<br>
InputInfoList InputInfos;<br>
@@ -2086,9 +2229,9 @@ InputInfo Driver::BuildJobsForActionNoCa<br>
// FIXME: Clean this up.<br>
bool SubJobAtTopLevel =<br>
AtTopLevel && (isa<DsymutilJobAction>(A) || isa<VerifyJobAction>(A));<br>
- InputInfos.push_back(BuildJobsForAction(C, Input, TC, BoundArch,<br>
- SubJobAtTopLevel, MultipleArchs,<br>
- LinkingOutput, CachedResults));<br>
+ InputInfos.push_back(BuildJobsForAction(<br>
+ C, Input, TC, BoundArch, SubJobAtTopLevel, MultipleArchs, LinkingOutput,<br>
+ CachedResults, BuildForOffloadDevice));<br>
}<br>
<br>
// Always use the first input as the base input.<br>
@@ -2099,9 +2242,10 @@ InputInfo Driver::BuildJobsForActionNoCa<br>
if (JA->getType() == types::TY_dSYM)<br>
BaseInput = InputInfos[0].getFilename();<br>
<br>
- // Append outputs of cuda device jobs to the input list<br>
- if (CudaDeviceInputInfos.size())<br>
- InputInfos.append(CudaDeviceInputInfos.begin(), CudaDeviceInputInfos.end());<br>
+ // Append outputs of offload device jobs to the input list<br>
+ if (!OffloadDependencesInputInfo.empty())<br>
+ InputInfos.append(OffloadDependencesInputInfo.begin(),<br>
+ OffloadDependencesInputInfo.end());<br>
<br>
// Determine the place to write output to, if any.<br>
InputInfo Result;<br>
@@ -2109,7 +2253,8 @@ InputInfo Driver::BuildJobsForActionNoCa<br>
Result = InputInfo(A, BaseInput);<br>
else<br>
Result = InputInfo(A, GetNamedOutputPath(C, *JA, BaseInput, BoundArch,<br>
- AtTopLevel, MultipleArchs),<br>
+ AtTopLevel, MultipleArchs,<br>
+ TC->getTriple().normalize()),<br>
BaseInput);<br>
<br>
if (CCCPrintBindings && !CCGenDiagnostics) {<br>
@@ -2169,7 +2314,8 @@ static const char *MakeCLOutputFilename(<br>
const char *Driver::GetNamedOutputPath(Compilation &C, const JobAction &JA,<br>
const char *BaseInput,<br>
const char *BoundArch, bool AtTopLevel,<br>
- bool MultipleArchs) const {<br>
+ bool MultipleArchs,<br>
+ StringRef NormalizedTriple) const {<br>
llvm::PrettyStackTraceString CrashInfo("Computing output path");<br>
// Output to a user requested destination?<br>
if (AtTopLevel && !isa<DsymutilJobAction>(JA) && !isa<VerifyJobAction>(JA)) {<br>
@@ -2255,6 +2401,7 @@ const char *Driver::GetNamedOutputPath(C<br>
MakeCLOutputFilename(C.getArgs(), "", BaseName, types::TY_Image);<br>
} else if (MultipleArchs && BoundArch) {<br>
SmallString<128> Output(getDefaultImageName());<br>
+ Output += JA.getOffloadingFileNamePrefix(NormalizedTriple);<br>
Output += "-";<br>
Output.append(BoundArch);<br>
NamedOutput = C.getArgs().MakeArgString(Output.c_str());<br>
@@ -2271,6 +2418,7 @@ const char *Driver::GetNamedOutputPath(C<br>
if (!types::appendSuffixForType(JA.getType()))<br>
End = BaseName.rfind('.');<br>
SmallString<128> Suffixed(BaseName.substr(0, End));<br>
+ Suffixed += JA.getOffloadingFileNamePrefix(NormalizedTriple);<br>
if (MultipleArchs && BoundArch) {<br>
Suffixed += "-";<br>
Suffixed.append(BoundArch);<br>
<br>
Modified: cfe/trunk/lib/Driver/ToolChain.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/ToolChain.cpp?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/ToolChain.cpp?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Driver/ToolChain.cpp (original)<br>
+++ cfe/trunk/lib/Driver/ToolChain.cpp Fri Jul 15 18:13:27 2016<br>
@@ -248,8 +248,7 @@ Tool *ToolChain::getTool(Action::ActionC<br>
<br>
case Action::InputClass:<br>
case Action::BindArchClass:<br>
- case Action::CudaDeviceClass:<br>
- case Action::CudaHostClass:<br>
+ case Action::OffloadClass:<br>
case Action::LipoJobClass:<br>
case Action::DsymutilJobClass:<br>
case Action::VerifyDebugInfoJobClass:<br>
<br>
Modified: cfe/trunk/lib/Driver/Tools.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Tools.cpp?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Tools.cpp?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Driver/Tools.cpp (original)<br>
+++ cfe/trunk/lib/Driver/Tools.cpp Fri Jul 15 18:13:27 2016<br>
@@ -296,12 +296,45 @@ static bool forwardToGCC(const Option &O<br>
!O.hasFlag(options::DriverOption) && !O.hasFlag(options::LinkerInput);<br>
}<br>
<br>
+/// Add the C++ include args of other offloading toolchains. If this is a host<br>
+/// job, the device toolchains are added. If this is a device job, the host<br>
+/// toolchains will be added.<br>
+static void addExtraOffloadCXXStdlibIncludeArgs(Compilation &C,<br>
+ const JobAction &JA,<br>
+ const ArgList &Args,<br>
+ ArgStringList &CmdArgs) {<br>
+<br>
+ if (JA.isHostOffloading(Action::OFK_Cuda))<br>
+ C.getSingleOffloadToolChain<Action::OFK_Cuda>()<br>
+ ->AddClangCXXStdlibIncludeArgs(Args, CmdArgs);<br>
+ else if (JA.isDeviceOffloading(Action::OFK_Cuda))<br>
+ C.getSingleOffloadToolChain<Action::OFK_Host>()<br>
+ ->AddClangCXXStdlibIncludeArgs(Args, CmdArgs);<br>
+<br>
+ // TODO: Add support for other programming models here.<br>
+}<br>
+<br>
+/// Add the include args that are specific to each offloading programming model.<br>
+static void addExtraOffloadSpecificIncludeArgs(Compilation &C,<br>
+ const JobAction &JA,<br>
+ const ArgList &Args,<br>
+ ArgStringList &CmdArgs) {<br>
+<br>
+ if (JA.isHostOffloading(Action::OFK_Cuda))<br>
+ C.getSingleOffloadToolChain<Action::OFK_Host>()->AddCudaIncludeArgs(<br>
+ Args, CmdArgs);<br>
+ else if (JA.isDeviceOffloading(Action::OFK_Cuda))<br>
+ C.getSingleOffloadToolChain<Action::OFK_Cuda>()->AddCudaIncludeArgs(<br>
+ Args, CmdArgs);<br>
+<br>
+ // TODO: Add support for other programming models here.<br>
+}<br>
+<br>
void Clang::AddPreprocessingOptions(Compilation &C, const JobAction &JA,<br>
const Driver &D, const ArgList &Args,<br>
ArgStringList &CmdArgs,<br>
const InputInfo &Output,<br>
- const InputInfoList &Inputs,<br>
- const ToolChain *AuxToolChain) const {<br>
+ const InputInfoList &Inputs) const {<br>
Arg *A;<br>
const bool IsIAMCU = getToolChain().getTriple().isOSIAMCU();<br>
<br>
@@ -566,31 +599,27 @@ void Clang::AddPreprocessingOptions(Comp<br>
// OBJCPLUS_INCLUDE_PATH - system includes enabled when compiling ObjC++.<br>
addDirectoryList(Args, CmdArgs, "-objcxx-isystem", "OBJCPLUS_INCLUDE_PATH");<br>
<br>
- // Optional AuxToolChain indicates that we need to include headers<br>
- // for more than one target. If that's the case, add include paths<br>
- // from AuxToolChain right after include paths of the same kind for<br>
- // the current target.<br>
+ // While adding the include arguments, we also attempt to retrieve the<br>
+ // arguments of related offloading toolchains or arguments specific to an<br>
+ // offloading programming model.<br>
<br>
// Add C++ include arguments, if needed.<br>
if (types::isCXX(Inputs[0].getType())) {<br>
getToolChain().AddClangCXXStdlibIncludeArgs(Args, CmdArgs);<br>
- if (AuxToolChain)<br>
- AuxToolChain->AddClangCXXStdlibIncludeArgs(Args, CmdArgs);<br>
+ addExtraOffloadCXXStdlibIncludeArgs(C, JA, Args, CmdArgs);<br>
}<br>
<br>
// Add system include arguments for all targets but IAMCU.<br>
if (!IsIAMCU) {<br>
getToolChain().AddClangSystemIncludeArgs(Args, CmdArgs);<br>
- if (AuxToolChain)<br>
- AuxToolChain->AddClangCXXStdlibIncludeArgs(Args, CmdArgs);<br>
+ addExtraOffloadCXXStdlibIncludeArgs(C, JA, Args, CmdArgs);<br>
} else {<br>
// For IAMCU add special include arguments.<br>
getToolChain().AddIAMCUIncludeArgs(Args, CmdArgs);<br>
}<br>
<br>
- // Add CUDA include arguments, if needed.<br>
- if (types::isCuda(Inputs[0].getType()))<br>
- getToolChain().AddCudaIncludeArgs(Args, CmdArgs);<br>
+ // Add offload include arguments, if needed.<br>
+ addExtraOffloadSpecificIncludeArgs(C, JA, Args, CmdArgs);<br>
}<br>
<br>
// FIXME: Move to target hook.<br>
@@ -3799,7 +3828,7 @@ void Clang::ConstructJob(Compilation &C,<br>
// CUDA compilation may have multiple inputs (source file + results of<br>
// device-side compilations). All other jobs are expected to have exactly one<br>
// input.<br>
- bool IsCuda = types::isCuda(Input.getType());<br>
+ bool IsCuda = JA.isOffloading(Action::OFK_Cuda);<br>
assert((IsCuda || Inputs.size() == 1) && "Unable to handle multiple inputs.");<br>
<br>
// C++ is not supported for IAMCU.<br>
@@ -3815,21 +3844,21 @@ void Clang::ConstructJob(Compilation &C,<br>
CmdArgs.push_back("-triple");<br>
CmdArgs.push_back(Args.MakeArgString(TripleStr));<br>
<br>
- const ToolChain *AuxToolChain = nullptr;<br>
if (IsCuda) {<br>
- // FIXME: We need a (better) way to pass information about<br>
- // particular compilation pass we're constructing here. For now we<br>
- // can check which toolchain we're using and pick the other one to<br>
- // extract the triple.<br>
- if (&getToolChain() == C.getSingleOffloadToolChain<Action::OFK_Cuda>())<br>
- AuxToolChain = C.getOffloadingHostToolChain();<br>
- else if (&getToolChain() == C.getOffloadingHostToolChain())<br>
- AuxToolChain = C.getSingleOffloadToolChain<Action::OFK_Cuda>();<br>
- else<br>
- llvm_unreachable("Can't figure out CUDA compilation mode.");<br>
- assert(AuxToolChain != nullptr && "No aux toolchain.");<br>
+ // We have to pass the triple of the host if compiling for a CUDA device and<br>
+ // vice-versa.<br>
+ StringRef NormalizedTriple;<br>
+ if (JA.isDeviceOffloading(Action::OFK_Cuda))<br>
+ NormalizedTriple = C.getSingleOffloadToolChain<Action::OFK_Host>()<br>
+ ->getTriple()<br>
+ .normalize();<br>
+ else<br>
+ NormalizedTriple = C.getSingleOffloadToolChain<Action::OFK_Cuda>()<br>
+ ->getTriple()<br>
+ .normalize();<br>
+<br>
CmdArgs.push_back("-aux-triple");<br>
- CmdArgs.push_back(Args.MakeArgString(AuxToolChain->getTriple().str()));<br>
+ CmdArgs.push_back(Args.MakeArgString(NormalizedTriple));<br>
}<br>
<br>
if (Triple.isOSWindows() && (Triple.getArch() == llvm::Triple::arm ||<br>
@@ -4718,8 +4747,7 @@ void Clang::ConstructJob(Compilation &C,<br>
//<br>
// FIXME: Support -fpreprocessed<br>
if (types::getPreprocessedType(InputType) != types::TY_INVALID)<br>
- AddPreprocessingOptions(C, JA, D, Args, CmdArgs, Output, Inputs,<br>
- AuxToolChain);<br>
+ AddPreprocessingOptions(C, JA, D, Args, CmdArgs, Output, Inputs);<br>
<br>
// Don't warn about "clang -c -DPIC -fPIC test.i" because libtool.m4 assumes<br>
// that "The compiler can only warn and ignore the option if not recognized".<br>
@@ -11193,15 +11221,14 @@ void NVPTX::Assembler::ConstructJob(Comp<br>
static_cast<const toolchains::CudaToolChain &>(getToolChain());<br>
assert(TC.getTriple().isNVPTX() && "Wrong platform");<br>
<br>
- std::vector<std::string> gpu_archs =<br>
- Args.getAllArgValues(options::OPT_march_EQ);<br>
- assert(gpu_archs.size() == 1 && "Exactly one GPU Arch required for ptxas.");<br>
- const std::string& gpu_arch = gpu_archs[0];<br>
+ // Obtain architecture from the action.<br>
+ CudaArch gpu_arch = StringToCudaArch(JA.getOffloadingArch());<br>
+ assert(gpu_arch != CudaArch::UNKNOWN &&<br>
+ "Device action expected to have an architecture.");<br>
<br>
// Check that our installation's ptxas supports gpu_arch.<br>
if (!Args.hasArg(options::OPT_no_cuda_version_check)) {<br>
- TC.cudaInstallation().CheckCudaVersionSupportsArch(<br>
- StringToCudaArch(gpu_arch));<br>
+ TC.cudaInstallation().CheckCudaVersionSupportsArch(gpu_arch);<br>
}<br>
<br>
ArgStringList CmdArgs;<br>
@@ -11245,7 +11272,7 @@ void NVPTX::Assembler::ConstructJob(Comp<br>
}<br>
<br>
CmdArgs.push_back("--gpu-name");<br>
- CmdArgs.push_back(Args.MakeArgString(gpu_arch));<br>
+ CmdArgs.push_back(Args.MakeArgString(CudaArchToString(gpu_arch)));<br>
CmdArgs.push_back("--output-file");<br>
CmdArgs.push_back(Args.MakeArgString(Output.getFilename()));<br>
for (const auto& II : Inputs)<br>
@@ -11277,13 +11304,20 @@ void NVPTX::Linker::ConstructJob(Compila<br>
CmdArgs.push_back(Args.MakeArgString(Output.getFilename()));<br>
<br>
for (const auto& II : Inputs) {<br>
- auto* A = cast<const CudaDeviceAction>(II.getAction());<br>
+ auto *A = II.getAction();<br>
+ assert(A->getInputs().size() == 1 &&<br>
+ "Device offload action is expected to have a single input");<br>
+    const char *gpu_arch_str = A->getOffloadingArch();<br>
+    assert(gpu_arch_str &&<br>
+           "Device action expected to have an associated GPU architecture!");<br>
+ CudaArch gpu_arch = StringToCudaArch(gpu_arch_str);<br>
+<br>
// We need to pass an Arch of the form "sm_XX" for cubin files and<br>
// "compute_XX" for ptx.<br>
const char *Arch =<br>
(II.getType() == types::TY_PP_Asm)<br>
- ? CudaVirtualArchToString(VirtualArchForCudaArch(A->getGpuArch()))<br>
- : CudaArchToString(A->getGpuArch());<br>
+ ? CudaVirtualArchToString(VirtualArchForCudaArch(gpu_arch))<br>
+ : gpu_arch_str;<br>
CmdArgs.push_back(Args.MakeArgString(llvm::Twine("--image=profile=") +<br>
Arch + ",file=" + II.getFilename()));<br>
}<br>
<br>
Modified: cfe/trunk/lib/Driver/Tools.h<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Tools.h?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Driver/Tools.h?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Driver/Tools.h (original)<br>
+++ cfe/trunk/lib/Driver/Tools.h Fri Jul 15 18:13:27 2016<br>
@@ -57,8 +57,7 @@ private:<br>
const Driver &D, const llvm::opt::ArgList &Args,<br>
llvm::opt::ArgStringList &CmdArgs,<br>
const InputInfo &Output,<br>
- const InputInfoList &Inputs,<br>
- const ToolChain *AuxToolChain) const;<br>
+ const InputInfoList &Inputs) const;<br>
<br>
void AddAArch64TargetArgs(const llvm::opt::ArgList &Args,<br>
llvm::opt::ArgStringList &CmdArgs) const;<br>
<br>
Modified: cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp?rev=275645&r1=275644&r2=275645&view=diff" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp?rev=275645&r1=275644&r2=275645&view=diff</a><br>
==============================================================================<br>
--- cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp (original)<br>
+++ cfe/trunk/lib/Frontend/CreateInvocationFromCommandLine.cpp Fri Jul 15 18:13:27 2016<br>
@@ -60,25 +60,25 @@ clang::createInvocationFromCommandLine(A<br>
}<br>
<br>
// We expect to get back exactly one command job, if we didn't something<br>
- // failed. CUDA compilation is an exception as it creates multiple jobs. If<br>
- // that's the case, we proceed with the first job. If caller needs particular<br>
- // CUDA job, it should be controlled via --cuda-{host|device}-only option<br>
- // passed to the driver.<br>
+ // failed. Offload compilation is an exception as it creates multiple jobs. If<br>
+  // that's the case, we proceed with the first job. If the caller needs a<br>
+  // particular job, it should be controlled via options (e.g.<br>
+ // --cuda-{host|device}-only for CUDA) passed to the driver.<br>
const driver::JobList &Jobs = C->getJobs();<br>
- bool CudaCompilation = false;<br>
+ bool OffloadCompilation = false;<br>
if (Jobs.size() > 1) {<br>
for (auto &A : C->getActions()){<br>
// On MacOSX real actions may end up being wrapped in BindArchAction<br>
if (isa<driver::BindArchAction>(A))<br>
A = *A->input_begin();<br>
- if (isa<driver::CudaDeviceAction>(A)) {<br>
- CudaCompilation = true;<br>
+ if (isa<driver::OffloadAction>(A)) {<br>
+ OffloadCompilation = true;<br>
break;<br>
}<br>
}<br>
}<br>
if (Jobs.size() == 0 || !isa<driver::Command>(*Jobs.begin()) ||<br>
- (Jobs.size() > 1 && !CudaCompilation)) {<br>
+ (Jobs.size() > 1 && !OffloadCompilation)) {<br>
SmallString<256> Msg;<br>
llvm::raw_svector_ostream OS(Msg);<br>
Jobs.Print(OS, "; ", true);<br>
<br>
Added: cfe/trunk/test/Driver/cuda_phases.cu<br>
URL: <a href="http://llvm.org/viewvc/llvm-project/cfe/trunk/test/Driver/cuda_phases.cu?rev=275645&view=auto" rel="noreferrer" target="_blank">http://llvm.org/viewvc/llvm-project/cfe/trunk/test/Driver/cuda_phases.cu?rev=275645&view=auto</a><br>
==============================================================================<br>
--- cfe/trunk/test/Driver/cuda_phases.cu (added)<br>
+++ cfe/trunk/test/Driver/cuda_phases.cu Fri Jul 15 18:13:27 2016<br>
@@ -0,0 +1,206 @@<br>
+// Tests the phases generated for a CUDA offloading target for different<br>
+// combinations of:<br>
+// - Number of gpu architectures;<br>
+// - Host/device-only compilation;<br>
+// - User-requested final phase - binary or assembly.<br>
+<br>
+// REQUIRES: clang-driver<br>
+// REQUIRES: powerpc-registered-target<br>
+// REQUIRES: nvptx-registered-target<br>
+<br>
+//<br>
+// Test single gpu architecture with complete compilation.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=BIN %s<br>
+// BIN: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// BIN: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// BIN: 2: compiler, {1}, ir, (host-cuda)<br>
+// BIN: 3: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// BIN: 4: preprocessor, {3}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// BIN: 5: compiler, {4}, ir, (device-cuda, sm_30)<br>
+// BIN: 6: backend, {5}, assembler, (device-cuda, sm_30)<br>
+// BIN: 7: assembler, {6}, object, (device-cuda, sm_30)<br>
+// BIN: 8: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {7}, object<br>
+// BIN: 9: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {6}, assembler<br>
+// BIN: 10: linker, {8, 9}, cuda-fatbin, (device-cuda)<br>
+// BIN: 11: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, "device-cuda (nvptx64-nvidia-cuda)" {10}, ir<br>
+// BIN: 12: backend, {11}, assembler, (host-cuda)<br>
+// BIN: 13: assembler, {12}, object, (host-cuda)<br>
+// BIN: 14: linker, {13}, image, (host-cuda)<br>
+<br>
+//<br>
+// Test single gpu architecture up to the assemble phase.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=ASM %s<br>
+// ASM: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// ASM: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// ASM: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// ASM: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// ASM: 4: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {3}, assembler<br>
+// ASM: 5: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// ASM: 6: preprocessor, {5}, cuda-cpp-output, (host-cuda)<br>
+// ASM: 7: compiler, {6}, ir, (host-cuda)<br>
+// ASM: 8: backend, {7}, assembler, (host-cuda)<br>
+<br>
+//<br>
+// Test two gpu architectures with complete compilation.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=BIN2 %s<br>
+// BIN2: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// BIN2: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// BIN2: 2: compiler, {1}, ir, (host-cuda)<br>
+// BIN2: 3: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// BIN2: 4: preprocessor, {3}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// BIN2: 5: compiler, {4}, ir, (device-cuda, sm_30)<br>
+// BIN2: 6: backend, {5}, assembler, (device-cuda, sm_30)<br>
+// BIN2: 7: assembler, {6}, object, (device-cuda, sm_30)<br>
+// BIN2: 8: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {7}, object<br>
+// BIN2: 9: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {6}, assembler<br>
+// BIN2: 10: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_35)<br>
+// BIN2: 11: preprocessor, {10}, cuda-cpp-output, (device-cuda, sm_35)<br>
+// BIN2: 12: compiler, {11}, ir, (device-cuda, sm_35)<br>
+// BIN2: 13: backend, {12}, assembler, (device-cuda, sm_35)<br>
+// BIN2: 14: assembler, {13}, object, (device-cuda, sm_35)<br>
+// BIN2: 15: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {14}, object<br>
+// BIN2: 16: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {13}, assembler<br>
+// BIN2: 17: linker, {8, 9, 15, 16}, cuda-fatbin, (device-cuda)<br>
+// BIN2: 18: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, "device-cuda (nvptx64-nvidia-cuda)" {17}, ir<br>
+// BIN2: 19: backend, {18}, assembler, (host-cuda)<br>
+// BIN2: 20: assembler, {19}, object, (host-cuda)<br>
+// BIN2: 21: linker, {20}, image, (host-cuda)<br>
+<br>
+//<br>
+// Test two gpu architectures up to the assemble phase.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=ASM2 %s<br>
+// ASM2: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// ASM2: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// ASM2: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// ASM2: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// ASM2: 4: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {3}, assembler<br>
+// ASM2: 5: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_35)<br>
+// ASM2: 6: preprocessor, {5}, cuda-cpp-output, (device-cuda, sm_35)<br>
+// ASM2: 7: compiler, {6}, ir, (device-cuda, sm_35)<br>
+// ASM2: 8: backend, {7}, assembler, (device-cuda, sm_35)<br>
+// ASM2: 9: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {8}, assembler<br>
+// ASM2: 10: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// ASM2: 11: preprocessor, {10}, cuda-cpp-output, (host-cuda)<br>
+// ASM2: 12: compiler, {11}, ir, (host-cuda)<br>
+// ASM2: 13: backend, {12}, assembler, (host-cuda)<br>
+<br>
+//<br>
+// Test single gpu architecture with complete compilation in host-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s --cuda-host-only 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=HBIN %s<br>
+// HBIN: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// HBIN: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// HBIN: 2: compiler, {1}, ir, (host-cuda)<br>
+// HBIN: 3: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, ir<br>
+// HBIN: 4: backend, {3}, assembler, (host-cuda)<br>
+// HBIN: 5: assembler, {4}, object, (host-cuda)<br>
+// HBIN: 6: linker, {5}, image, (host-cuda)<br>
+<br>
+//<br>
+// Test single gpu architecture up to the assemble phase in host-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s --cuda-host-only -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=HASM %s<br>
+// HASM: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// HASM: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// HASM: 2: compiler, {1}, ir, (host-cuda)<br>
+// HASM: 3: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, ir<br>
+// HASM: 4: backend, {3}, assembler, (host-cuda)<br>
+<br>
+//<br>
+// Test two gpu architectures with complete compilation in host-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s --cuda-host-only 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=HBIN2 %s<br>
+// HBIN2: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// HBIN2: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// HBIN2: 2: compiler, {1}, ir, (host-cuda)<br>
+// HBIN2: 3: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, ir<br>
+// HBIN2: 4: backend, {3}, assembler, (host-cuda)<br>
+// HBIN2: 5: assembler, {4}, object, (host-cuda)<br>
+// HBIN2: 6: linker, {5}, image, (host-cuda)<br>
+<br>
+//<br>
+// Test two gpu architectures up to the assemble phase in host-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s --cuda-host-only -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=HASM2 %s<br>
+// HASM2: 0: input, "{{.*}}cuda_phases.cu", cuda, (host-cuda)<br>
+// HASM2: 1: preprocessor, {0}, cuda-cpp-output, (host-cuda)<br>
+// HASM2: 2: compiler, {1}, ir, (host-cuda)<br>
+// HASM2: 3: offload, "host-cuda (powerpc64le-ibm-linux-gnu)" {2}, ir<br>
+// HASM2: 4: backend, {3}, assembler, (host-cuda)<br>
+<br>
+//<br>
+// Test single gpu architecture with complete compilation in device-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s --cuda-device-only 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=DBIN %s<br>
+// DBIN: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// DBIN: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// DBIN: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// DBIN: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// DBIN: 4: assembler, {3}, object, (device-cuda, sm_30)<br>
+// DBIN: 5: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {4}, object<br>
+<br>
+//<br>
+// Test single gpu architecture up to the assemble phase in device-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 %s --cuda-device-only -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=DASM %s<br>
+// DASM: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// DASM: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// DASM: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// DASM: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// DASM: 4: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {3}, assembler<br>
+<br>
+//<br>
+// Test two gpu architectures with complete compilation in device-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s --cuda-device-only 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=DBIN2 %s<br>
+// DBIN2: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// DBIN2: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// DBIN2: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// DBIN2: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// DBIN2: 4: assembler, {3}, object, (device-cuda, sm_30)<br>
+// DBIN2: 5: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {4}, object<br>
+// DBIN2: 6: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_35)<br>
+// DBIN2: 7: preprocessor, {6}, cuda-cpp-output, (device-cuda, sm_35)<br>
+// DBIN2: 8: compiler, {7}, ir, (device-cuda, sm_35)<br>
+// DBIN2: 9: backend, {8}, assembler, (device-cuda, sm_35)<br>
+// DBIN2: 10: assembler, {9}, object, (device-cuda, sm_35)<br>
+// DBIN2: 11: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {10}, object<br>
+<br>
+//<br>
+// Test two gpu architectures up to the assemble phase in device-only<br>
+// compilation mode.<br>
+//<br>
+// RUN: %clang -target powerpc64le-ibm-linux-gnu -ccc-print-phases --cuda-gpu-arch=sm_30 --cuda-gpu-arch=sm_35 %s --cuda-device-only -S 2>&1 \<br>
+// RUN: | FileCheck -check-prefix=DASM2 %s<br>
+// DASM2: 0: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_30)<br>
+// DASM2: 1: preprocessor, {0}, cuda-cpp-output, (device-cuda, sm_30)<br>
+// DASM2: 2: compiler, {1}, ir, (device-cuda, sm_30)<br>
+// DASM2: 3: backend, {2}, assembler, (device-cuda, sm_30)<br>
+// DASM2: 4: offload, "device-cuda (nvptx64-nvidia-cuda:sm_30)" {3}, assembler<br>
+// DASM2: 5: input, "{{.*}}cuda_phases.cu", cuda, (device-cuda, sm_35)<br>
+// DASM2: 6: preprocessor, {5}, cuda-cpp-output, (device-cuda, sm_35)<br>
+// DASM2: 7: compiler, {6}, ir, (device-cuda, sm_35)<br>
+// DASM2: 8: backend, {7}, assembler, (device-cuda, sm_35)<br>
+// DASM2: 9: offload, "device-cuda (nvptx64-nvidia-cuda:sm_35)" {8}, assembler<br>
<br>
<br>
_______________________________________________<br>
cfe-commits mailing list<br>
<a href="mailto:cfe-commits@lists.llvm.org">cfe-commits@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits</a><br>
</blockquote></div><br></div>