[PATCH] D79754: [OpenMP][AMDGCN] Support OpenMP offloading for AMDGCN architecture - Part 1

Sameer Sahasrabuddhe via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri May 15 09:45:07 PDT 2020


sameerds added inline comments.


================
Comment at: clang/include/clang/Basic/TargetInfo.h:1261
+  /// Currently only supports NVPTX and AMDGCN
+  static bool isOpenMPGPU(llvm::Triple &T) {
+    return T.isNVPTX() || T.isAMDGCN();
----------------
How is "OpenMP-compatible GPU" defined? I think it's too early to start designing predicates about whether a target is a GPU and whether it supports OpenMP.


================
Comment at: clang/lib/AST/Decl.cpp:3221
+        !hasAttr<CUDAHostAttr>()) ||
+       Context.getTargetInfo().getTriple().isAMDGCN()) &&
       !(BuiltinID == Builtin::BIprintf || BuiltinID == Builtin::BImalloc))
----------------
This seems awkward to me. Why mix it up with only CUDA and HIP? The earlier factoring is better, where CUDA/HIP took care of their own business, and the catch-all case of AMDGCN was a separate clause by itself. It doesn't matter that the builtins being checked for AMDGCN on OpenMP are //currently// identical to CUDA/HIP. When this situation later changes (I am sure OpenMP will support more builtins), we will have to split it out again anyway.
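
Roughly what I have in mind, as a sketch only (I am not quoting the exact surrounding condition, just the shape of the split; the AMDGCN calls are taken from this diff):

  // <existing CUDA/HIP condition stays exactly as it was> ...
  if (<existing CUDA/HIP condition> &&
      !(BuiltinID == Builtin::BIprintf || BuiltinID == Builtin::BImalloc))
    return 0;
  // ... and AMDGCN on the OpenMP device path gets its own clause, so the set
  // of allowed builtins can diverge later without re-splitting the condition.
  if (Context.getTargetInfo().getTriple().isAMDGCN() &&
      !(BuiltinID == Builtin::BIprintf || BuiltinID == Builtin::BImalloc))
    return 0;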


================
Comment at: clang/lib/Frontend/CompilerInvocation.cpp:3100
 
+  bool IsOpenMPGPU = clang::TargetInfo::isOpenMPGPU(T);
+
----------------
I am not particularly in favour of introducing a variable just because it is shorter than writing out the call at each use. If you really want it this way, at least make it const.
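
That is, assuming the local variable stays, something like:

  const bool IsOpenMPGPU = clang::TargetInfo::isOpenMPGPU(T);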


================
Comment at: clang/lib/Frontend/CompilerInvocation.cpp:3104
   // handling code for those requiring so.
-  if ((Opts.OpenMPIsDevice && T.isNVPTX()) || Opts.OpenCLCPlusPlus) {
+  if ((Opts.OpenMPIsDevice && IsOpenMPGPU) || Opts.OpenCLCPlusPlus) {
     Opts.Exceptions = 0;
----------------
Looking at the comment before this line, the correct predicate would be "target supports exceptions with OpenMP". Is it always true that every GPU that supports OpenMP will not support exception handling? I would recommend just checking individual targets for now instead of inventing predicates.
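
Concretely, something along these lines (just a sketch, keeping the targets spelled out at the use site):

  // If AMDGCN's OpenMP device path also needs exceptions disabled, name it
  // explicitly next to NVPTX rather than hiding it behind a GPU predicate.
  if ((Opts.OpenMPIsDevice && (T.isNVPTX() || T.isAMDGCN())) ||
      Opts.OpenCLCPlusPlus) {
    Opts.Exceptions = 0;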


================
Comment at: clang/lib/Frontend/CompilerInvocation.cpp:3157
+  // Set CUDA mode for OpenMP target NVPTX/AMDGCN if specified in options
+  Opts.OpenMPCUDAMode = Opts.OpenMPIsDevice && IsOpenMPGPU &&
                         Args.hasArg(options::OPT_fopenmp_cuda_mode);
----------------
Is there any reason to believe that every future GPU added to this predicate will also want the CUDA mode set? I would recommend using individual targets for now instead of inventing a new predicate.
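
I.e. keep it spelled out (sketch, reusing the names already in this function):

  Opts.OpenMPCUDAMode = Opts.OpenMPIsDevice &&
                        (T.isNVPTX() || T.isAMDGCN()) &&
                        Args.hasArg(options::OPT_fopenmp_cuda_mode);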


================
Comment at: clang/lib/Frontend/CompilerInvocation.cpp:3162
   Opts.OpenMPCUDAForceFullRuntime =
-      Opts.OpenMPIsDevice && T.isNVPTX() &&
+      Opts.OpenMPIsDevice && IsOpenMPGPU &&
       Args.hasArg(options::OPT_fopenmp_cuda_force_full_runtime);
----------------
Same doubt about this use of an artificial predicate as commented earlier.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D79754/new/

https://reviews.llvm.org/D79754




