[Openmp-commits] [PATCH] D39439: [OpenMP]Fix race condition in omp_init_lock

Adam Azarchs via Phabricator via Openmp-commits openmp-commits at lists.llvm.org
Tue Oct 31 15:08:24 PDT 2017


adam-azarchs updated this revision to Diff 121070.
adam-azarchs added a comment.

Thanks, everyone, for the swift response and comments.

This fixes the test to properly destroy the locks when done.
REPETITIONS was previously not doing anything for this test: the
test (indirectly) exercises the global lock pool, which does not
get re-initialized with each repetition.  This restructuring of the
test makes more sense from that perspective.  I've again
confirmed that the test fails without the patch to kmp_lock.cpp
and passes with it.

I've also removed the edit to testsuite, as suggested.
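
For anyone who wants to poke at the race outside the lit harness,
here is a minimal standalone reduction of the pattern the test
exercises (my own sketch, not part of the patch; NUM_LOCKS is an
arbitrary constant chosen to exceed KMP_I_LOCK_CHUNK, and whether a
plain omp_lock_t routes through the indirect lock table depends on
how the runtime is configured):

// Standalone reduction (not part of the patch): thread 0 keeps calling
// omp_init_lock, which can force the global indirect lock table to grow,
// while thread 1 sets and unsets a lock that already lives in the table.
// Without the kmp_lock.cpp fix, thread 1 can chase the table pointer into
// freshly allocated, not-yet-copied memory.
#include <omp.h>

#define NUM_LOCKS 4096 // arbitrary; comfortably above KMP_I_LOCK_CHUNK (1024)

static omp_lock_t lcks[NUM_LOCKS];

int main() {
  int i;
  omp_init_lock(&lcks[0]);
#pragma omp parallel num_threads(2) private(i)
  {
    if (omp_get_thread_num() == 0) {
      // Grower: each init may trigger a reallocation of the lock table.
      for (i = 1; i < NUM_LOCKS; i++)
        omp_init_lock(&lcks[i]);
    } else {
      // User: hammers a pre-existing lock while the table may be moving.
      for (i = 0; i < 1000000; i++) {
        omp_set_lock(&lcks[0]);
        omp_unset_lock(&lcks[0]);
      }
    }
  }
  for (i = 0; i < NUM_LOCKS; i++)
    omp_destroy_lock(&lcks[i]);
  return 0;
}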


https://reviews.llvm.org/D39439

Files:
  runtime/src/kmp_lock.cpp
  runtime/test/lock/omp_init_lock.c


Index: runtime/test/lock/omp_init_lock.c
===================================================================
--- runtime/test/lock/omp_init_lock.c
+++ runtime/test/lock/omp_init_lock.c
@@ -0,0 +1,42 @@
+// RUN: %libomp-compile-and-run
+#include "omp_testsuite.h"
+#include <stdio.h>
+
+// This should be slightly less than KMP_I_LOCK_CHUNK, which is 1024
+#define LOCKS_PER_ITER 1000
+#define ITERATIONS (REPETITIONS + 1)
+
+// This test concurrently uses locks on one thread while initializing new
+// ones on another thread.  This exercises the global lock pool.
+int test_omp_init_lock() {
+  int i;
+  omp_lock_t lcks[ITERATIONS * LOCKS_PER_ITER];
+#pragma omp parallel for schedule(static) num_threads(NUM_TASKS)
+  for (i = 0; i < ITERATIONS; i++) {
+    int j;
+    omp_lock_t *my_lcks = &lcks[i * LOCKS_PER_ITER];
+    for (j = 0; j < LOCKS_PER_ITER; j++) {
+      omp_init_lock(&my_lcks[j]);
+    }
+    for (j = 0; j < LOCKS_PER_ITER * 100; j++) {
+      omp_set_lock(&my_lcks[j % LOCKS_PER_ITER]);
+      omp_unset_lock(&my_lcks[j % LOCKS_PER_ITER]);
+    }
+  }
+  // Wait until all repetitions are done.  The test is exercising growth of
+  // the global lock pool, which does not shrink when no locks are allocated.
+  {
+    int j;
+    for (j = 0; j < ITERATIONS * LOCKS_PER_ITER; j++) {
+      omp_destroy_lock(&lcks[j]);
+    }
+  }
+
+  return 0;
+}
+
+int main() {
+  // No use repeating this test, since it's exercising a private global pool
+  // which is not reset between test iterations.
+  return test_omp_init_lock();
+}
Index: runtime/src/kmp_lock.cpp
===================================================================
--- runtime/src/kmp_lock.cpp
+++ runtime/src/kmp_lock.cpp
@@ -3059,11 +3059,12 @@
     if (idx == __kmp_i_lock_table.size) {
       // Double up the space for block pointers
       int row = __kmp_i_lock_table.size / KMP_I_LOCK_CHUNK;
-      kmp_indirect_lock_t **old_table = __kmp_i_lock_table.table;
-      __kmp_i_lock_table.table = (kmp_indirect_lock_t **)__kmp_allocate(
+      kmp_indirect_lock_t **new_table = (kmp_indirect_lock_t **)__kmp_allocate(
           2 * row * sizeof(kmp_indirect_lock_t *));
-      KMP_MEMCPY(__kmp_i_lock_table.table, old_table,
+      KMP_MEMCPY(new_table, __kmp_i_lock_table.table,
                  row * sizeof(kmp_indirect_lock_t *));
+      kmp_indirect_lock_t **old_table = __kmp_i_lock_table.table;
+      __kmp_i_lock_table.table = new_table;
       __kmp_free(old_table);
       // Allocate new objects in the new blocks
       for (int i = row; i < 2 * row; ++i)
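
To spell out why the kmp_lock.cpp hunk matters: the old code stored the
new allocation into __kmp_i_lock_table.table before copying the existing
entries over, so a thread concurrently dereferencing the table pointer
could land in uninitialized memory.  The patch reorders this to allocate,
copy, publish, then free.  Below is a standalone sketch of that
publish-after-copy ordering with hypothetical names (not the kmp_lock.cpp
symbols); the real writer path is serialized by the runtime's own lock,
and lock-free readers racing with the final free are a separate hazard
this sketch does not address:

/* Hypothetical illustration of the ordering fixed by the patch. */
#include <stdlib.h>
#include <string.h>

typedef struct {
  void **table; /* read by lock users without holding the writer lock */
  size_t size;
} lock_table_t;

/* Assumed to be called with the writer-side lock held. */
static int grow_table(lock_table_t *t) {
  void **new_table = malloc(2 * t->size * sizeof(void *));
  if (new_table == NULL)
    return -1;
  /* 1. Populate the new table completely before it becomes visible. */
  memcpy(new_table, t->table, t->size * sizeof(void *));
  memset(new_table + t->size, 0, t->size * sizeof(void *));
  /* 2. Publish.  The buggy order stored into t->table before the memcpy,
     leaving a window where readers saw uninitialized entries. */
  void **old_table = t->table;
  t->table = new_table;
  t->size *= 2;
  /* 3. Free only after the new table is published, as in the patch. */
  free(old_table);
  return 0;
}

With this ordering the shared pointer only ever points at a fully-copied
table, which is exactly the failure mode the new test catches.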

