[clang] [AMDGPU] Introduce 'amdgpu_num_workgroups_{xyz}' builtin (PR #83927)

Joseph Huber via cfe-commits cfe-commits at lists.llvm.org
Mon Mar 4 15:31:58 PST 2024


https://github.com/jhuber6 created https://github.com/llvm/llvm-project/pull/83927

Summary:
The AMDGPU target was originally designed with OpenCL in mind. The first
versions only provided the grid size, which is the total number of
threads in the execution context. To get the number of "blocks" in the
CUDA sense, you then had to divide by the number of threads in the
current work group.
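Concretely, code that wanted the CUDA-style block count had to do that
division by hand. A minimal sketch of the old approach, using the
existing builtins (the helper name here is just for illustration):

```c
// Sketch of the manual pre-COV5 computation: the grid size is the total
// number of threads in the x dimension, so dividing by the workgroup
// size yields the number of workgroups (CUDA "blocks").
unsigned num_blocks_x(void) {
  return __builtin_amdgcn_grid_size_x() /
         (unsigned)__builtin_amdgcn_workgroup_size_x();
}
```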

The switch from COV4 to COV5 changed the way these arguments are encoded
and added a new offset for the "block" size. This patch introduces code
to access this directly instead. The name `num_workgroups` was chosen
because the OpenCL standard doesn't seem to have a good name for this
concept, and calling them "blocks" is CUDA terminology (even though
they're the same thing).
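An OpenCL-flavored sketch of roughly what the COV5 path lowers to (the
helper is hypothetical; the byte offsets 0/4/8 for x/y/z match the
codegen in the patch below):

```c
// Illustrative only: read the workgroup count for one dimension out of
// the implicit argument buffer, mirroring the COV5 path in this patch.
unsigned num_workgroups_dim(unsigned index) {
  __constant unsigned *args =
      (__constant unsigned *)__builtin_amdgcn_implicitarg_ptr();
  return args[index]; // x/y/z live at byte offsets 0, 4, and 8
}
```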

This patch also provides support for the old COV4 format by dividing
the grid size by the workgroup size. This lets us switch over to the
new builtins in the OpenMP runtime even though COV4 is no longer the
officially supported version. I tested this using my libc utilities on
both versions and it functioned as expected.
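With the patch applied, the value is available directly and works on
both ABI versions; usage looks something like this:

```c
// Usage sketch: total number of workgroups in the dispatch, using the
// new builtins. The COV4 fallback division is emitted by the compiler,
// not written by the user.
unsigned total_workgroups(void) {
  return __builtin_amdgcn_num_workgroups_x() *
         __builtin_amdgcn_num_workgroups_y() *
         __builtin_amdgcn_num_workgroups_z();
}
```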


From b7090a51b8322cc1c05f5a05894fef5a56dcbcf7 Mon Sep 17 00:00:00 2001
From: Joseph Huber <huberjn at outlook.com>
Date: Mon, 4 Mar 2024 17:27:28 -0600
Subject: [PATCH] [AMDGPU] Introduce 'amdgpu_num_workgroups_{xyz}' builtin

Summary:
The AMDGPU target was originally designed with OpenCL in mind. The first
versions only provided the grid size, which is the total number of
threads in the execution context. To get the number of "blocks" in the
CUDA sense, you then had to divide by the number of threads in the
current work group.

The switch from COV4 to COV5 changed the way these arguments are encoded
and added a new offset for the "block" size. This patch introduces code
to access this directly instead. The name `num_workgroups` was chosen
because the OpenCL standard doesn't seem to have a good name for this
concept, and calling them "blocks" is CUDA terminology (even though
they're the same thing).

This patch also provides support for the old COV4 format by dividing
the grid size by the workgroup size. This lets us switch over to the
new builtins in the OpenMP runtime even though COV4 is no longer the
officially supported version. I tested this using my libc utilities on
both versions and it functioned as expected.
---
 clang/include/clang/Basic/BuiltinsAMDGPU.def |   4 +
 clang/lib/CodeGen/CGBuiltin.cpp              |  95 ++++++++--
 clang/test/CodeGen/amdgpu-abi-version.c      | 174 +++++++++++++++++--
 clang/test/CodeGenOpenCL/builtins-amdgcn.cl  |  18 ++
 4 files changed, 262 insertions(+), 29 deletions(-)

diff --git a/clang/include/clang/Basic/BuiltinsAMDGPU.def b/clang/include/clang/Basic/BuiltinsAMDGPU.def
index 213311b96df74f..43f3f500bf8056 100644
--- a/clang/include/clang/Basic/BuiltinsAMDGPU.def
+++ b/clang/include/clang/Basic/BuiltinsAMDGPU.def
@@ -42,6 +42,10 @@ BUILTIN(__builtin_amdgcn_workgroup_size_x, "Us", "nc")
 BUILTIN(__builtin_amdgcn_workgroup_size_y, "Us", "nc")
 BUILTIN(__builtin_amdgcn_workgroup_size_z, "Us", "nc")
 
+BUILTIN(__builtin_amdgcn_num_workgroups_x, "Ui", "nc")
+BUILTIN(__builtin_amdgcn_num_workgroups_y, "Ui", "nc")
+BUILTIN(__builtin_amdgcn_num_workgroups_z, "Ui", "nc")
+
 BUILTIN(__builtin_amdgcn_grid_size_x, "Ui", "nc")
 BUILTIN(__builtin_amdgcn_grid_size_y, "Ui", "nc")
 BUILTIN(__builtin_amdgcn_grid_size_z, "Ui", "nc")
diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 9ee51ca7142c77..fac1921d0f8a4a 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -17816,22 +17816,9 @@ Value *EmitAMDGPUImplicitArgPtr(CodeGenFunction &CGF) {
   return Call;
 }
 
-// \p Index is 0, 1, and 2 for x, y, and z dimension, respectively.
-/// Emit code based on Code Object ABI version.
-/// COV_4    : Emit code to use dispatch ptr
-/// COV_5+   : Emit code to use implicitarg ptr
-/// COV_NONE : Emit code to load a global variable "__oclc_ABI_version"
-///            and use its value for COV_4 or COV_5+ approach. It is used for
-///            compiling device libraries in an ABI-agnostic way.
-///
 /// Note: "__oclc_ABI_version" is supposed to be emitted and intialized by
 ///       clang during compilation of user code.
-Value *EmitAMDGPUWorkGroupSize(CodeGenFunction &CGF, unsigned Index) {
-  llvm::LoadInst *LD;
-
-  auto Cov = CGF.getTarget().getTargetOpts().CodeObjectVersion;
-
-  if (Cov == CodeObjectVersionKind::COV_None) {
+Value *getAMDGPUABIVersion(CodeGenFunction &CGF) {
     StringRef Name = "__oclc_ABI_version";
     auto *ABIVersionC = CGF.CGM.getModule().getNamedGlobal(Name);
     if (!ABIVersionC)
@@ -17844,8 +17831,24 @@ Value *EmitAMDGPUWorkGroupSize(CodeGenFunction &CGF, unsigned Index) {
     // This load will be eliminated by the IPSCCP because it is constant
     // weak_odr without externally_initialized. Either changing it to weak or
     // adding externally_initialized will keep the load.
-    Value *ABIVersion = CGF.Builder.CreateAlignedLoad(CGF.Int32Ty, ABIVersionC,
-                                                      CGF.CGM.getIntAlign());
+    return CGF.Builder.CreateAlignedLoad(CGF.Int32Ty, ABIVersionC,
+                                         CGF.CGM.getIntAlign());
+}
+
+/// Emit code based on Code Object ABI version.
+/// COV_4    : Emit code to use dispatch ptr
+/// COV_5+   : Emit code to use implicitarg ptr
+/// COV_NONE : Emit code to load a global variable "__oclc_ABI_version"
+///            and use its value for COV_4 or COV_5+ approach. It is used for
+///            compiling device libraries in an ABI-agnostic way.
+Value *EmitAMDGPUWorkGroupSize(CodeGenFunction &CGF, unsigned Index) {
+  assert(Index < 3 && "Invalid dimension argument");
+  llvm::LoadInst *LD;
+
+  auto Cov = CGF.getTarget().getTargetOpts().CodeObjectVersion;
+
+  if (Cov == CodeObjectVersionKind::COV_None) {
+    Value *ABIVersion = getAMDGPUABIVersion(CGF);
 
     Value *IsCOV5 = CGF.Builder.CreateICmpSGE(
         ABIVersion,
@@ -17901,6 +17904,58 @@ Value *EmitAMDGPUGridSize(CodeGenFunction &CGF, unsigned Index) {
                   llvm::MDNode::get(CGF.getLLVMContext(), std::nullopt));
   return LD;
 }
+
+/// Emit code based on Code Object ABI version.
+/// COV_4    : Emit code to use dispatch ptr
+/// COV_5+   : Emit code to use implicitarg ptr
+/// COV_NONE : Emit code to load a global variable "__oclc_ABI_version"
+///            and use its value for COV_4 or COV_5+ approach. It is used for
+///            compiling device libraries in an ABI-agnostic way.
+Value *EmitAMDGPUNumWorkGroups(CodeGenFunction &CGF, unsigned Index) {
+  assert(Index < 3 && "Invalid dimension argument");
+  llvm::Instruction *I;
+
+  auto Cov = CGF.getTarget().getTargetOpts().CodeObjectVersion;
+
+  // Indexing using the implicit kernel arguments.
+  auto EmitCOV5 = [](CodeGenFunction &CGF, unsigned Index) -> llvm::Value * {
+    llvm::Value *ImplicitGEP = CGF.Builder.CreateConstGEP1_32(
+        CGF.Int8Ty, EmitAMDGPUImplicitArgPtr(CGF), Index * sizeof(uint32_t));
+    llvm::LoadInst *LD = CGF.Builder.CreateLoad(
+        Address(ImplicitGEP, CGF.Int32Ty, CharUnits::fromQuantity(2)));
+    LD->setMetadata(llvm::LLVMContext::MD_invariant_load,
+                    llvm::MDNode::get(CGF.getLLVMContext(), std::nullopt));
+    LD->setMetadata(llvm::LLVMContext::MD_noundef,
+                    llvm::MDNode::get(CGF.getLLVMContext(), std::nullopt));
+    return LD;
+  };
+
+  // Indexing into the packet arguments and dividing the grid size.
+  auto EmitCOV4 = [](CodeGenFunction &CGF, unsigned Index) -> llvm::Value * {
+    auto GridSize = EmitAMDGPUGridSize(CGF, Index);
+    auto WorkGroupSize = EmitAMDGPUWorkGroupSize(CGF, Index);
+    return CGF.Builder.CreateUDiv(
+        GridSize, CGF.Builder.CreateZExt(WorkGroupSize, GridSize->getType()));
+  };
+
+  if (Cov == CodeObjectVersionKind::COV_None) {
+    Value *ABIVersion = getAMDGPUABIVersion(CGF);
+
+    Value *ImplicitGEP = EmitCOV5(CGF, Index);
+
+    Value *IsCOV5 = CGF.Builder.CreateICmpSGE(
+        ABIVersion,
+        llvm::ConstantInt::get(CGF.Int32Ty, CodeObjectVersionKind::COV_5));
+
+    Value *DispatchGEP = EmitCOV4(CGF, Index);
+
+    return CGF.Builder.CreateSelect(IsCOV5, ImplicitGEP, DispatchGEP);
+  }
+
+  return Cov >= CodeObjectVersionKind::COV_5 ? EmitCOV5(CGF, Index)
+                                             : EmitCOV4(CGF, Index);
+}
+
 } // namespace
 
 // For processing memory ordering and memory scope arguments of various
@@ -18697,6 +18752,14 @@ Value *CodeGenFunction::EmitAMDGPUBuiltinExpr(unsigned BuiltinID,
   case AMDGPU::BI__builtin_amdgcn_workgroup_size_z:
     return EmitAMDGPUWorkGroupSize(*this, 2);
 
+  // amdgcn num workgroups
+  case AMDGPU::BI__builtin_amdgcn_num_workgroups_x:
+    return EmitAMDGPUNumWorkGroups(*this, 0);
+  case AMDGPU::BI__builtin_amdgcn_num_workgroups_y:
+    return EmitAMDGPUNumWorkGroups(*this, 1);
+  case AMDGPU::BI__builtin_amdgcn_num_workgroups_z:
+    return EmitAMDGPUNumWorkGroups(*this, 2);
+
   // amdgcn grid size
   case AMDGPU::BI__builtin_amdgcn_grid_size_x:
     return EmitAMDGPUGridSize(*this, 0);
diff --git a/clang/test/CodeGen/amdgpu-abi-version.c b/clang/test/CodeGen/amdgpu-abi-version.c
index 4e5ad87655f230..e83a2e0c6540bc 100644
--- a/clang/test/CodeGen/amdgpu-abi-version.c
+++ b/clang/test/CodeGen/amdgpu-abi-version.c
@@ -1,23 +1,171 @@
-// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py UTC_ARGS: --check-globals --version 3
+// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py UTC_ARGS: --check-globals all --version 4
 // RUN: %clang_cc1 -cc1 -triple amdgcn-amd-amdhsa -emit-llvm -mcode-object-version=none %s -o - | FileCheck %s
 
 //.
 // CHECK: @__oclc_ABI_version = external addrspace(4) global i32
 //.
-// CHECK-LABEL: define dso_local i32 @foo(
-// CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+// CHECK-LABEL: define dso_local i32 @workgroup_size(
+// CHECK-SAME: i32 noundef [[DIM:%.*]]) #[[ATTR0:[0-9]+]] {
 // CHECK-NEXT:  entry:
 // CHECK-NEXT:    [[RETVAL:%.*]] = alloca i32, align 4, addrspace(5)
+// CHECK-NEXT:    [[DIM_ADDR:%.*]] = alloca i32, align 4, addrspace(5)
 // CHECK-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
-// CHECK-NEXT:    [[TMP1:%.*]] = icmp sge i32 [[TMP0]], 500
+// CHECK-NEXT:    [[DIM_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DIM_ADDR]] to ptr
+// CHECK-NEXT:    store i32 [[DIM]], ptr [[DIM_ADDR_ASCAST]], align 4
+// CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[DIM_ADDR_ASCAST]], align 4
+// CHECK-NEXT:    switch i32 [[TMP0]], label [[SW_EPILOG:%.*]] [
+// CHECK-NEXT:      i32 0, label [[SW_BB:%.*]]
+// CHECK-NEXT:      i32 1, label [[SW_BB1:%.*]]
+// CHECK-NEXT:      i32 2, label [[SW_BB3:%.*]]
+// CHECK-NEXT:    ]
+// CHECK:       sw.bb:
+// CHECK-NEXT:    [[TMP1:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP2:%.*]] = icmp sge i32 [[TMP1]], 500
+// CHECK-NEXT:    [[TMP3:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP3]], i32 12
+// CHECK-NEXT:    [[TMP5:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP6:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP5]], i32 4
+// CHECK-NEXT:    [[TMP7:%.*]] = select i1 [[TMP2]], ptr addrspace(4) [[TMP4]], ptr addrspace(4) [[TMP6]]
+// CHECK-NEXT:    [[TMP8:%.*]] = load i16, ptr addrspace(4) [[TMP7]], align 2, !range [[RNG2:![0-9]+]], !invariant.load [[META3:![0-9]+]], !noundef [[META3]]
+// CHECK-NEXT:    [[CONV:%.*]] = zext i16 [[TMP8]] to i32
+// CHECK-NEXT:    store i32 [[CONV]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN:%.*]]
+// CHECK:       sw.bb1:
+// CHECK-NEXT:    [[TMP9:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP10:%.*]] = icmp sge i32 [[TMP9]], 500
+// CHECK-NEXT:    [[TMP11:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP11]], i32 14
+// CHECK-NEXT:    [[TMP13:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP14:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP13]], i32 6
+// CHECK-NEXT:    [[TMP15:%.*]] = select i1 [[TMP10]], ptr addrspace(4) [[TMP12]], ptr addrspace(4) [[TMP14]]
+// CHECK-NEXT:    [[TMP16:%.*]] = load i16, ptr addrspace(4) [[TMP15]], align 2, !range [[RNG2]], !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[CONV2:%.*]] = zext i16 [[TMP16]] to i32
+// CHECK-NEXT:    store i32 [[CONV2]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN]]
+// CHECK:       sw.bb3:
+// CHECK-NEXT:    [[TMP17:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP18:%.*]] = icmp sge i32 [[TMP17]], 500
+// CHECK-NEXT:    [[TMP19:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP20:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP19]], i32 16
+// CHECK-NEXT:    [[TMP21:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP22:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP21]], i32 8
+// CHECK-NEXT:    [[TMP23:%.*]] = select i1 [[TMP18]], ptr addrspace(4) [[TMP20]], ptr addrspace(4) [[TMP22]]
+// CHECK-NEXT:    [[TMP24:%.*]] = load i16, ptr addrspace(4) [[TMP23]], align 2, !range [[RNG2]], !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[CONV4:%.*]] = zext i16 [[TMP24]] to i32
+// CHECK-NEXT:    store i32 [[CONV4]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN]]
+// CHECK:       sw.epilog:
+// CHECK-NEXT:    unreachable
+// CHECK:       return:
+// CHECK-NEXT:    [[TMP25:%.*]] = load i32, ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    ret i32 [[TMP25]]
+//
+int workgroup_size(int dim) {
+  switch(dim) {
+    case 0:
+      return __builtin_amdgcn_workgroup_size_x();
+    case 1:
+      return __builtin_amdgcn_workgroup_size_y();
+    case 2:
+      return __builtin_amdgcn_workgroup_size_z();
+  }
+  __builtin_unreachable();
+}
+
+// CHECK-LABEL: define dso_local i32 @num_workgroups(
+// CHECK-SAME: i32 noundef [[DIM:%.*]]) #[[ATTR0]] {
+// CHECK-NEXT:  entry:
+// CHECK-NEXT:    [[RETVAL:%.*]] = alloca i32, align 4, addrspace(5)
+// CHECK-NEXT:    [[DIM_ADDR:%.*]] = alloca i32, align 4, addrspace(5)
+// CHECK-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT:    [[DIM_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DIM_ADDR]] to ptr
+// CHECK-NEXT:    store i32 [[DIM]], ptr [[DIM_ADDR_ASCAST]], align 4
+// CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[DIM_ADDR_ASCAST]], align 4
+// CHECK-NEXT:    switch i32 [[TMP0]], label [[SW_EPILOG:%.*]] [
+// CHECK-NEXT:      i32 0, label [[SW_BB:%.*]]
+// CHECK-NEXT:      i32 1, label [[SW_BB1:%.*]]
+// CHECK-NEXT:      i32 2, label [[SW_BB2:%.*]]
+// CHECK-NEXT:    ]
+// CHECK:       sw.bb:
+// CHECK-NEXT:    [[TMP1:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
 // CHECK-NEXT:    [[TMP2:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
-// CHECK-NEXT:    [[TMP3:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP2]], i32 12
-// CHECK-NEXT:    [[TMP4:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
-// CHECK-NEXT:    [[TMP5:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP4]], i32 4
-// CHECK-NEXT:    [[TMP6:%.*]] = select i1 [[TMP1]], ptr addrspace(4) [[TMP3]], ptr addrspace(4) [[TMP5]]
-// CHECK-NEXT:    [[TMP7:%.*]] = load i16, ptr addrspace(4) [[TMP6]], align 2, !range [[RNG2:![0-9]+]], !invariant.load !3, !noundef !3
-// CHECK-NEXT:    [[CONV:%.*]] = zext i16 [[TMP7]] to i32
-// CHECK-NEXT:    ret i32 [[CONV]]
+// CHECK-NEXT:    [[TMP3:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP2]], i32 0
+// CHECK-NEXT:    [[TMP4:%.*]] = load i32, ptr addrspace(4) [[TMP3]], align 2, !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP5:%.*]] = icmp sge i32 [[TMP1]], 500
+// CHECK-NEXT:    [[TMP6:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP7:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP6]], i32 12
+// CHECK-NEXT:    [[TMP8:%.*]] = load i32, ptr addrspace(4) [[TMP7]], align 4, !invariant.load [[META3]]
+// CHECK-NEXT:    [[TMP9:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP10:%.*]] = icmp sge i32 [[TMP9]], 500
+// CHECK-NEXT:    [[TMP11:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP11]], i32 12
+// CHECK-NEXT:    [[TMP13:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP14:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP13]], i32 4
+// CHECK-NEXT:    [[TMP15:%.*]] = select i1 [[TMP10]], ptr addrspace(4) [[TMP12]], ptr addrspace(4) [[TMP14]]
+// CHECK-NEXT:    [[TMP16:%.*]] = load i16, ptr addrspace(4) [[TMP15]], align 2, !range [[RNG2]], !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP17:%.*]] = zext i16 [[TMP16]] to i32
+// CHECK-NEXT:    [[TMP18:%.*]] = udiv i32 [[TMP8]], [[TMP17]]
+// CHECK-NEXT:    [[TMP19:%.*]] = select i1 [[TMP5]], i32 [[TMP4]], i32 [[TMP18]]
+// CHECK-NEXT:    store i32 [[TMP19]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN:%.*]]
+// CHECK:       sw.bb1:
+// CHECK-NEXT:    [[TMP20:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP21:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP22:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP21]], i32 4
+// CHECK-NEXT:    [[TMP23:%.*]] = load i32, ptr addrspace(4) [[TMP22]], align 2, !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP24:%.*]] = icmp sge i32 [[TMP20]], 500
+// CHECK-NEXT:    [[TMP25:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP26:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP25]], i32 16
+// CHECK-NEXT:    [[TMP27:%.*]] = load i32, ptr addrspace(4) [[TMP26]], align 4, !invariant.load [[META3]]
+// CHECK-NEXT:    [[TMP28:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP29:%.*]] = icmp sge i32 [[TMP28]], 500
+// CHECK-NEXT:    [[TMP30:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP31:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP30]], i32 14
+// CHECK-NEXT:    [[TMP32:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP33:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP32]], i32 6
+// CHECK-NEXT:    [[TMP34:%.*]] = select i1 [[TMP29]], ptr addrspace(4) [[TMP31]], ptr addrspace(4) [[TMP33]]
+// CHECK-NEXT:    [[TMP35:%.*]] = load i16, ptr addrspace(4) [[TMP34]], align 2, !range [[RNG2]], !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP36:%.*]] = zext i16 [[TMP35]] to i32
+// CHECK-NEXT:    [[TMP37:%.*]] = udiv i32 [[TMP27]], [[TMP36]]
+// CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP24]], i32 [[TMP23]], i32 [[TMP37]]
+// CHECK-NEXT:    store i32 [[TMP38]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN]]
+// CHECK:       sw.bb2:
+// CHECK-NEXT:    [[TMP39:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP40:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP41:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP40]], i32 8
+// CHECK-NEXT:    [[TMP42:%.*]] = load i32, ptr addrspace(4) [[TMP41]], align 2, !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP43:%.*]] = icmp sge i32 [[TMP39]], 500
+// CHECK-NEXT:    [[TMP44:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP45:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP44]], i32 20
+// CHECK-NEXT:    [[TMP46:%.*]] = load i32, ptr addrspace(4) [[TMP45]], align 4, !invariant.load [[META3]]
+// CHECK-NEXT:    [[TMP47:%.*]] = load i32, ptr addrspace(4) @__oclc_ABI_version, align 4
+// CHECK-NEXT:    [[TMP48:%.*]] = icmp sge i32 [[TMP47]], 500
+// CHECK-NEXT:    [[TMP49:%.*]] = call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK-NEXT:    [[TMP50:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP49]], i32 16
+// CHECK-NEXT:    [[TMP51:%.*]] = call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
+// CHECK-NEXT:    [[TMP52:%.*]] = getelementptr i8, ptr addrspace(4) [[TMP51]], i32 8
+// CHECK-NEXT:    [[TMP53:%.*]] = select i1 [[TMP48]], ptr addrspace(4) [[TMP50]], ptr addrspace(4) [[TMP52]]
+// CHECK-NEXT:    [[TMP54:%.*]] = load i16, ptr addrspace(4) [[TMP53]], align 2, !range [[RNG2]], !invariant.load [[META3]], !noundef [[META3]]
+// CHECK-NEXT:    [[TMP55:%.*]] = zext i16 [[TMP54]] to i32
+// CHECK-NEXT:    [[TMP56:%.*]] = udiv i32 [[TMP46]], [[TMP55]]
+// CHECK-NEXT:    [[TMP57:%.*]] = select i1 [[TMP43]], i32 [[TMP42]], i32 [[TMP56]]
+// CHECK-NEXT:    store i32 [[TMP57]], ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    br label [[RETURN]]
+// CHECK:       sw.epilog:
+// CHECK-NEXT:    unreachable
+// CHECK:       return:
+// CHECK-NEXT:    [[TMP58:%.*]] = load i32, ptr [[RETVAL_ASCAST]], align 4
+// CHECK-NEXT:    ret i32 [[TMP58]]
 //
-int foo() { return __builtin_amdgcn_workgroup_size_x(); }
+int num_workgroups(int dim) {
+  switch(dim) {
+    case 0:
+      return __builtin_amdgcn_num_workgroups_x();
+    case 1:
+      return __builtin_amdgcn_num_workgroups_y();
+    case 2:
+      return __builtin_amdgcn_num_workgroups_z();
+  }
+  __builtin_unreachable();
+}
diff --git a/clang/test/CodeGenOpenCL/builtins-amdgcn.cl b/clang/test/CodeGenOpenCL/builtins-amdgcn.cl
index 7d9010ee9067d6..224ad7ea53e22e 100644
--- a/clang/test/CodeGenOpenCL/builtins-amdgcn.cl
+++ b/clang/test/CodeGenOpenCL/builtins-amdgcn.cl
@@ -618,6 +618,24 @@ void test_get_workgroup_size(int d, global int *out)
 	}
 }
 
+// CHECK-LABEL: @test_get_num_workgroups(
+// CHECK: call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK: load i32, ptr addrspace(4) %{{.*}} align 8, !invariant.load{{.*}}, !noundef
+// CHECK: tail call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK: getelementptr inbounds i8, ptr addrspace(4) %{{.*}} i64 4
+// CHECK: load i32, ptr addrspace(4) %{{.*}} align 4, !invariant.load{{.*}}, !noundef
+// CHECK: tail call align 8 dereferenceable(256) ptr addrspace(4) @llvm.amdgcn.implicitarg.ptr()
+// CHECK: getelementptr inbounds i8, ptr addrspace(4) %{{.*}} i64 8
+// CHECK: load i32, ptr addrspace(4) %{{.*}} align 8, !invariant.load{{.*}}, !noundef
+void test_get_num_workgroups(int d, int *out) {
+  switch (d) {
+  case 0: *out = __builtin_amdgcn_num_workgroups_x(); break;
+  case 1: *out = __builtin_amdgcn_num_workgroups_y(); break;
+  case 2: *out = __builtin_amdgcn_num_workgroups_z(); break;
+  default: *out = 0;
+  }
+}
+
 // CHECK-LABEL: @test_get_grid_size(
 // CHECK: call align 4 dereferenceable(64) ptr addrspace(4) @llvm.amdgcn.dispatch.ptr()
 // CHECK: getelementptr inbounds i8, ptr addrspace(4) %{{.*}}, i64 12


