[llvm-bugs] [Bug 34050] New: Race condition in __kmp_lookup_indirect_lock

via llvm-bugs llvm-bugs at lists.llvm.org
Thu Aug 3 11:05:02 PDT 2017


https://bugs.llvm.org/show_bug.cgi?id=34050

            Bug ID: 34050
           Summary: Race condition in __kmp_lookup_indirect_lock
           Product: OpenMP
           Version: unspecified
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: normal
          Priority: P
         Component: Runtime Library
          Assignee: unassignedbugs at nondot.org
          Reporter: adam.azarchs at 10xgenomics.com
                CC: llvm-bugs at lists.llvm.org

If a user allocates locks on one thread while setting/unsetting them on
another, there can be a race between __kmp_lookup_indirect_lock and
__kmp_allocate_indirect_lock over access to __kmp_i_lock_table whenever
__kmp_allocate_indirect_lock needs to grow the lock pool.

__kmp_allocate_indirect_lock acquires __kmp_global_lock to protect its updates
to __kmp_i_lock_table, but the various functions that read __kmp_i_lock_table
(generally through __kmp_lookup_indirect_lock or __kmp_indirect_get_location)
do not take that lock, for sound performance reasons.  Most writers to
__kmp_i_lock_table go through __kmp_lock_table_insert, which performs the
memcpy before updating the __kmp_i_lock_table.table pointer, so lock-free
readers always observe a valid table; __kmp_allocate_indirect_lock does not
follow that order.  Either its order of operations should be changed, or it
should be converted to use __kmp_lock_table_insert.
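
For illustration only, here is a minimal sketch of the two growth orderings
being contrasted, assuming a lock-free reader that simply dereferences the
published table pointer.  This is not the actual kmp_lock code; the struct,
field, and function names below are made up for the example:
```
#include <cstddef>
#include <cstring>

// Hypothetical stand-in for __kmp_i_lock_table; field names are illustrative.
struct lock_table {
  void **table;
  std::size_t size;
};

// Copy-then-publish: the order the report credits to __kmp_lock_table_insert.
// The new array is fully populated before the table pointer is swapped, so a
// reader that follows `table` without a lock never sees uninitialized slots.
void grow_copy_then_publish(lock_table &t, std::size_t new_size) {
  void **fresh = new void *[new_size]();                  // zero-initialized
  if (t.size != 0)
    std::memcpy(fresh, t.table, t.size * sizeof(void *)); // copy first
  t.table = fresh;                                        // publish second
  t.size = new_size;
  // Reclaiming the old array safely is a separate problem; leaked here.
}

// Publish-then-copy: the problematic order described for
// __kmp_allocate_indirect_lock.  The pointer is swapped before the old
// contents are copied over, leaving a window in which a concurrent reader
// can index into entries that have not been copied yet.
void grow_publish_then_copy(lock_table &t, std::size_t new_size) {
  void **old = t.table;
  t.table = new void *[new_size]();                       // publish first
  if (t.size != 0)
    std::memcpy(t.table, old, t.size * sizeof(void *));   // copy second: race window
  t.size = new_size;
  // Old array leaked here as well, to keep the sketch short.
}
```
Even copy-then-publish assumes the pointer update itself is observed
atomically by readers; the narrower point of this report is that
__kmp_allocate_indirect_lock does not follow the ordering the other writers
(via __kmp_lock_table_insert) already use.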

A simple reproducing case, which will usually segfault:
```
#include <omp.h>

#include <vector>

int main(int argc, char *argv[]) {
  // Each dynamically scheduled iteration initializes its own pool of locks
  // while other threads are setting/unsetting theirs, so growth of the
  // global indirect-lock table overlaps with concurrent lock lookups.
#pragma omp parallel for schedule(dynamic, 1)
  for (size_t z = 0; z < 100; z++) {
    size_t Q = 10000 * z;
    std::vector<omp_lock_t> locks(Q);
    for (size_t i = 0; i < Q; i++) {
      omp_init_lock(&locks[i]);
    }
    // Keep the locks busy so lookups race with allocations on other threads.
    for (size_t i = 0; i < 100 * Q; i++) {
      auto lock = &locks[i / 100];
      omp_set_lock(lock);
      omp_unset_lock(lock);
    }
    for (size_t i = 0; i < Q; i++) {
      omp_destroy_lock(&locks[i]);
    }
  }
}
```
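
(The reproducer only needs OpenMP enabled to build; for example, something
like `clang++ -fopenmp MakeCrash.cpp -o MakeCrash`, adding `-fsanitize=thread`
for the tsan run below.  The exact flags are illustrative, not taken from the
original run.)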

If run under tsan, you get reports consistent with the above diagnosis:
```
==================
WARNING: ThreadSanitizer: data race (pid=97042)
  Read of size 8 at 0x7b3800000040 by thread T40:
    #0 memcpy <null> (MakeCrash+0x45045d)
    #1 __kmp_allocate_indirect_lock <null> (MakeCrash+0x5791f8)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Previous write of size 8 at 0x7b3800000040 by thread T47:
    #0 malloc <null> (MakeCrash+0x442645)
    #1 ___kmp_allocate <null> (MakeCrash+0x589694)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Location is heap block of size 224 at 0x7b3800000000 allocated by thread T47:
    #0 malloc <null> (MakeCrash+0x442645)
    #1 ___kmp_allocate <null> (MakeCrash+0x589694)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Thread T40 (tid=97083, running) created by main thread at:
    #0 pthread_create <null> (MakeCrash+0x444781)
    #1 __kmp_create_worker <null> (MakeCrash+0x57d644)
    #2 __libc_start_main <null> (libc.so.6+0x21b34)

  Thread T47 (tid=97090, running) created by main thread at:
    #0 pthread_create <null> (MakeCrash+0x444781)
    #1 __kmp_create_worker <null> (MakeCrash+0x57d644)
    #2 __libc_start_main <null> (libc.so.6+0x21b34)

SUMMARY: ThreadSanitizer: data race (/mnt/home/adam/bin/MakeCrash+0x45045d) in memcpy
==================
```
