[PATCH] D17056: Mark all CUDA device-side function defs and decls as convergent.

Justin Lebar via cfe-commits cfe-commits at lists.llvm.org
Wed Feb 17 16:35:26 PST 2016


jlebar updated this revision to Diff 48260.
jlebar added a comment.

Move code into SetLLVMFunctionAttributes (not ForDefinition).


http://reviews.llvm.org/D17056

Files:
  lib/CodeGen/CodeGenModule.cpp
  test/CodeGenCUDA/convergent.cu

Index: test/CodeGenCUDA/convergent.cu
===================================================================
--- /dev/null
+++ test/CodeGenCUDA/convergent.cu
@@ -0,0 +1,35 @@
+// REQUIRES: x86-registered-target
+// REQUIRES: nvptx-registered-target
+
+// RUN: %clang_cc1 -fcuda-is-device -triple nvptx-nvidia-cuda -emit-llvm \
+// RUN:   -disable-llvm-passes -o - %s | FileCheck -check-prefix DEVICE %s
+
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu -emit-llvm \
+// RUN:   -disable-llvm-passes -o - %s | \
+// RUN:  FileCheck -check-prefix HOST %s
+
+#include "Inputs/cuda.h"
+
+// DEVICE: Function Attrs:
+// DEVICE-SAME: convergent
+// DEVICE-NEXT: define void @_Z3foov
+__device__ void foo() {}
+
+// HOST: Function Attrs:
+// HOST-NOT: convergent
+// HOST-NEXT: define void @_Z3barv
+// DEVICE: Function Attrs:
+// DEVICE-SAME: convergent
+// DEVICE-NEXT: define void @_Z3barv
+__host__ __device__ void baz();
+__host__ __device__ void bar() { baz(); }
+
+// DEVICE: declare void @_Z3bazv() [[BAZ_ATTR:#[0-9]+]]
+// DEVICE: attributes [[BAZ_ATTR]] = {
+// DEVICE-SAME: convergent
+// DEVICE-SAME: }
+
+// HOST: declare void @_Z3bazv() [[BAZ_ATTR:#[0-9]+]]
+// HOST: attributes [[BAZ_ATTR]] = {
+// HOST-NOT: convergent
+// HOST-SAME: }
Index: lib/CodeGen/CodeGenModule.cpp
===================================================================
--- lib/CodeGen/CodeGenModule.cpp
+++ lib/CodeGen/CodeGenModule.cpp
@@ -813,6 +813,14 @@
                          false);
   F->setAttributes(llvm::AttributeSet::get(getLLVMContext(), AttributeList));
   F->setCallingConv(static_cast<llvm::CallingConv::ID>(CallingConv));
+
+  if (getLangOpts().CUDA && getLangOpts().CUDAIsDevice) {
+    // Conservatively, mark all functions in CUDA as convergent (meaning, they
+    // may call an intrinsically convergent op, such as __syncthreads(), and so
+    // can't have certain optimizations applied around them).  LLVM will remove
+    // this attribute where it safely can.
+    F->addFnAttr(llvm::Attribute::Convergent);
+  }
 }
 
 /// Determines whether the language options require us to model
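
For readers less familiar with CUDA, here is a minimal sketch (not part of the patch; the function names, the 256-element buffer, and the assumption of at most 256 threads per block are illustrative) of the behavior the new comment describes: __syncthreads() is an intrinsically convergent barrier, so a call to a function that may reach it must not be sunk, hoisted, or duplicated into divergent control flow. Marking every device-side function convergent up front keeps such transforms off until LLVM can prove the attribute unnecessary and drop it.

  __device__ int block_sum(int v) {
    __shared__ int buf[256];       // illustrative; assumes blockDim.x <= 256
    buf[threadIdx.x] = v;
    __syncthreads();               // intrinsically convergent: every thread
                                   // in the block must reach this barrier
    int total = 0;
    for (unsigned i = 0; i < blockDim.x; ++i)
      total += buf[i];
    return total;
  }

  __global__ void kernel(int *out) {
    // block_sum is convergent, so it must not be sunk into the divergent
    // branch below; if it were, only thread 0 would reach the barrier and
    // the block would deadlock.
    int s = block_sum(threadIdx.x);
    if (threadIdx.x == 0)
      out[blockIdx.x] = s;
  }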

