[PATCH] D12917: [CUDA] Allow parsing of host and device code simultaneously.

Artem Belevich via cfe-commits cfe-commits at lists.llvm.org
Wed Sep 16 17:22:22 PDT 2015


tra created this revision.
tra added reviewers: echristo, eliben, jpienaar.
tra added a subscriber: cfe-commits.
Herald added subscribers: dschuff, jfb.

The patch:
  - adds the -aux-triple option to specify the triple of the auxiliary (opposite-side) target
  - propagates the aux target info to the AST context and Preprocessor
  - pulls in target-specific preprocessor macros from the aux target
  - pulls in target-specific builtins from the aux target
  - sets the appropriate __host__ or __device__ attribute on builtins

Rationale:

  To compile CUDA source that mixes host and device code without physically separating the two, we need to be able to parse code that may contain target-specific constructs from both the host and device targets.

During device-side compilation we need to be able to process any host headers we encounter. Similarly, during host compilation we need to process the device-side headers required for the device-specific code in the file to parse. In both cases, we must fake the target environment well enough for those headers to work (i.e., an x86 host's headers want to see __amd64__ or __i386__ defined, while CUDA's headers look for NVPTX-specific macros).
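A preprocessor-level sketch of why both sets of predefines must be visible during either half of the compilation (the header contents shown are illustrative, not taken from real headers):

```cuda
// Sketch: code paths that must preprocess cleanly in BOTH compilations.

#ifdef __amd64__
// An x86-64 host header takes this path; device-side compilation must
// still see __amd64__ defined (via the aux target) for it to parse.
typedef long host_ptrdiff_t;
#endif

#ifdef __CUDA_ARCH__
// Device-only path: __CUDA_ARCH__ is predefined only during the
// device-side compilation, so host and device see different code here.
#define WARP_SIZE 32
#endif
```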

We also need to be able to parse target-specific builtins from both host and device targets in the same TU.
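A minimal sketch of such a TU (this is not the patch's test file; the cc1 flags and the particular builtins chosen are illustrative):

```cuda
// One TU, compiled twice; both compilations must parse both functions.
//   host:   clang -cc1 -triple x86_64-linux-gnu -aux-triple nvptx64-nvidia-cuda ...
//   device: clang -cc1 -triple nvptx64-nvidia-cuda -aux-triple x86_64-linux-gnu ...

__device__ int device_fn() {
  // NVPTX builtin: must still parse during *host* compilation, which is
  // why the host side now pulls in builtins from the aux (device) target.
  return __nvvm_read_ptx_sreg_tid_x();
}

__host__ long host_fn() {
  // x86 builtin: must still parse during *device* compilation.
  // Calling it from device code instead would violate the calling
  // restrictions and produce an error rather than bogus IR.
  return (long)__builtin_ia32_rdtsc();
}
```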

Generally speaking, it's not possible to achieve this in all cases. Fortunately, CUDA's case is simpler, and the proposed patch works pretty well in practice:

  - clang already implements attribute-based function overloading, which avoids name clashes between host and device functions.
  - basic host and device types are expected to match, so many of the type-related predefined macros from the host and device targets have the same value.
  - host headers (e.g., x86 on Linux) do not use the predefined NVPTX-specific macros, so including them is not a problem.
  - builtins from the aux target are only used for parsing and AST construction; CUDA never generates IR for them. If a builtin is used from the wrong context, it violates the calling restrictions and produces an error.
  - this change includes *all* builtins from the aux target. That gives us a superset of the builtins actually available on the opposite side of the compilation, which could let host and device compilations produce diverging ASTs. IMO this is similar to the already-existing divergence of host/device code caused by common (ab)use of the __CUDA_ARCH__ macro.
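The attribute-based overloading mentioned above can be sketched as follows (the function names are hypothetical, chosen only to show the mechanism):

```cuda
// The target attribute participates in overload resolution, so the same
// name can have a host and a device definition without a redeclaration
// clash -- this is what makes the shared host/device parse workable.
__host__   int twice(int x) { return 2 * x; }   // chosen in host code
__device__ int twice(int x) { return x + x; }   // chosen in device code

__global__ void kernel(int *out) { *out = twice(21); }  // __device__ overload
int main() { return twice(21) == 42 ? 0 : 1; }          // __host__ overload
```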



http://reviews.llvm.org/D12917

Files:
  include/clang/AST/ASTContext.h
  include/clang/Basic/Builtins.h
  include/clang/Driver/CC1Options.td
  include/clang/Frontend/CompilerInstance.h
  include/clang/Frontend/FrontendOptions.h
  include/clang/Lex/Preprocessor.h
  lib/AST/ASTContext.cpp
  lib/Basic/Builtins.cpp
  lib/CodeGen/CGBuiltin.cpp
  lib/Frontend/CompilerInstance.cpp
  lib/Frontend/CompilerInvocation.cpp
  lib/Frontend/InitPreprocessor.cpp
  lib/Lex/Preprocessor.cpp
  lib/Sema/SemaDecl.cpp
  test/SemaCUDA/builtins.cu

-------------- next part --------------
A non-text attachment was scrubbed...
Name: D12917.34940.patch
Type: text/x-patch
Size: 21596 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/cfe-commits/attachments/20150917/b6f8e005/attachment-0001.bin>

