[llvm] [OpenMP][GPU][FIX] Enable generic barriers in single threaded contexts (PR #140786)

Johannes Doerfert via llvm-commits llvm-commits at lists.llvm.org
Tue May 20 12:09:24 PDT 2025


https://github.com/jdoerfert created https://github.com/llvm/llvm-project/pull/140786

The generic GPU barrier implementation checked whether the current thread is the main thread in generic mode to identify single-threaded regions. This does not work: inside a non-active (= sequential) parallel region, that thread becomes the main thread of a team and is no longer the main thread in generic mode. At least, that is how the APIs are implemented today.

To identify single-threaded regions, we now check the team size explicitly.

This exposed three other issues: the first is, for now, expected and not a bug; the second is a bug and is marked with a FIXME in the
single_threaded_for_barrier_hang_2.c file; and the third is also benign, as described at the end.

The non-bug issue comes up if we ever initialize a thread state. Afterwards, we will never run any region in parallel. This is a little conservative, but thread states are really bad for performance anyway.

The bug comes up if we optimize single_threaded_for_barrier_hang_2.c and execute it in Generic-SPMD mode. For some reason we lose all the updates to b. This looks very much like a compiler bug, but it could also be another logic issue in the runtime; it needs to be investigated.

Issue number 3 comes up if we have nested parallels inside a target region. Clang's SPMD-check logic gets confused: it determines SPMD mode (which is fine) but picks an unreasonable thread count. This is all benign, I think, just weird:

```
  #pragma omp target teams
  #pragma omp parallel num_threads(64)
  #pragma omp parallel num_threads(10)
  {}
```
This example was launched with 10 threads, not 64.

From 6673f2cbf84abaf1bfa625926a2541465e082e79 Mon Sep 17 00:00:00 2001
From: Johannes Doerfert <johannes at jdoerfert.de>
Date: Tue, 20 May 2025 10:25:26 -0700
Subject: [PATCH] [OpenMP][GPU][FIX] Enable generic barriers in single threaded
 contexts

The generic GPU barrier implementation checked whether the current
thread is the main thread in generic mode to identify single-threaded
regions. This does not work: inside a non-active (= sequential) parallel
region, that thread becomes the main thread of a team and is no longer
the main thread in generic mode. At least, that is how the APIs are
implemented today.

To identify single-threaded regions, we now check the team size
explicitly.

This exposed three other issues: the first is, for now, expected and not
a bug; the second is a bug and is marked with a FIXME in the
single_threaded_for_barrier_hang_2.c file; and the third is also benign,
as described at the end.

The non-bug issue comes up if we ever initialize a thread state.
Afterwards, we will never run any region in parallel. This is a little
conservative, but thread states are really bad for performance anyway.

The bug comes up if we optimize single_threaded_for_barrier_hang_2.c
and execute it in Generic-SPMD mode. For some reason we lose all the
updates to b. This looks very much like a compiler bug, but it could
also be another logic issue in the runtime; it needs to be investigated.

Issue number 3 comes up if we have nested parallels inside a target
region. Clang's SPMD-check logic gets confused: it determines SPMD mode
(which is fine) but picks an unreasonable thread count. This is all
benign, I think, just weird:

```
  #pragma omp target teams
  #pragma omp parallel num_threads(64)
  #pragma omp parallel num_threads(10)
  {}
```

This example was launched with 10 threads, not 64.
---
 offload/DeviceRTL/src/Synchronization.cpp     |  8 ++++---
 .../single_threaded_for_barrier_hang_1.c      | 21 +++++++++++++++++
 .../single_threaded_for_barrier_hang_2.c      | 23 +++++++++++++++++++
 3 files changed, 49 insertions(+), 3 deletions(-)
 create mode 100644 offload/test/offloading/single_threaded_for_barrier_hang_1.c
 create mode 100644 offload/test/offloading/single_threaded_for_barrier_hang_2.c

diff --git a/offload/DeviceRTL/src/Synchronization.cpp b/offload/DeviceRTL/src/Synchronization.cpp
index 0854c21ee152a..2f1ed34a3f6d6 100644
--- a/offload/DeviceRTL/src/Synchronization.cpp
+++ b/offload/DeviceRTL/src/Synchronization.cpp
@@ -303,12 +303,14 @@ int32_t __kmpc_cancel_barrier(IdentTy *Loc, int32_t TId) {
 }
 
 void __kmpc_barrier(IdentTy *Loc, int32_t TId) {
-  if (mapping::isMainThreadInGenericMode())
-    return __kmpc_flush(Loc);
-
   if (mapping::isSPMDMode())
     return __kmpc_barrier_simple_spmd(Loc, TId);
 
+  // Generic parallel regions are run with multiple of the warp size or single
+  // threaded, in the latter case we need to stop here.
+  if (omp_get_num_threads() == 1)
+    return __kmpc_flush(Loc);
+
   impl::namedBarrier();
 }
 
diff --git a/offload/test/offloading/single_threaded_for_barrier_hang_1.c b/offload/test/offloading/single_threaded_for_barrier_hang_1.c
new file mode 100644
index 0000000000000..8ee6b51fb6818
--- /dev/null
+++ b/offload/test/offloading/single_threaded_for_barrier_hang_1.c
@@ -0,0 +1,21 @@
+// RUN: %libomptarget-compile-run-and-check-generic
+// RUN: %libomptarget-compileopt-run-and-check-generic
+
+#include <omp.h>
+#include <stdio.h>
+
+int main() {
+  int b = 0;
+
+#pragma omp target map(tofrom : b)
+  for (int i = 1; i <= 10; ++i) {
+#pragma omp parallel num_threads(10) reduction(+ : b)
+#pragma omp for
+    for (int k = 0; k < 10; ++k)
+      ++b;
+  }
+
+  // CHECK: b: 100
+  printf("b: %i\n", b);
+  return 0;
+}
diff --git a/offload/test/offloading/single_threaded_for_barrier_hang_2.c b/offload/test/offloading/single_threaded_for_barrier_hang_2.c
new file mode 100644
index 0000000000000..a98abd6922da7
--- /dev/null
+++ b/offload/test/offloading/single_threaded_for_barrier_hang_2.c
@@ -0,0 +1,23 @@
+// RUN: %libomptarget-compile-run-and-check-generic
+// FIXME: This fails with optimization enabled and prints b: 0
+// FIXME: RUN: %libomptarget-compileopt-run-and-check-generic
+
+#include <omp.h>
+#include <stdio.h>
+
+int main() {
+  int b = 0;
+
+#pragma omp target map(tofrom : b) thread_limit(256)
+  for (int i = 1; i <= 1; ++i) {
+#pragma omp parallel num_threads(64) reduction(+ : b)
+#pragma omp parallel num_threads(10) reduction(+ : b)
+#pragma omp for
+    for (int k = 0; k < 10; ++k)
+      ++b;
+  }
+
+  // CHECK: b: 640
+  printf("b: %i\n", b);
+  return 0;
+}


