<html>
<head>
<base href="https://bugs.llvm.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - Race condition in __kmp_lookup_indirect_lock"
href="https://bugs.llvm.org/show_bug.cgi?id=34050">34050</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>Race condition in __kmp_lookup_indirect_lock
</td>
</tr>
<tr>
<th>Product</th>
<td>OpenMP
</td>
</tr>
<tr>
<th>Version</th>
<td>unspecified
</td>
</tr>
<tr>
<th>Hardware</th>
<td>PC
</td>
</tr>
<tr>
<th>OS</th>
<td>Linux
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>P
</td>
</tr>
<tr>
<th>Component</th>
<td>Runtime Library
</td>
</tr>
<tr>
<th>Assignee</th>
<td>unassignedbugs@nondot.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>adam.azarchs@10xgenomics.com
</td>
</tr>
<tr>
<th>CC</th>
<td>llvm-bugs@lists.llvm.org
</td>
</tr></table>
<p>
<div>
<pre>If a user allocates locks on one thread while setting/unsetting them on
another, there is a race between __kmp_lookup_indirect_lock and
__kmp_allocate_indirect_lock around access to __kmp_i_lock_table whenever
__kmp_allocate_indirect_lock has to grow the pool.
__kmp_allocate_indirect_lock acquires __kmp_global_lock to protect writes to
__kmp_i_lock_table, but the various readers of __kmp_i_lock_table
(generally through __kmp_lookup_indirect_lock or __kmp_indirect_get_location)
do not take that lock, for sound performance reasons. Most writers to
__kmp_i_lock_table use __kmp_lock_table_insert, which performs the memcpy
before updating the __kmp_i_lock_table.table pointer, so lock-free readers
always observe a valid table; __kmp_allocate_indirect_lock, however, does not.
Either its order of operations should be changed to match, or it should be
updated to call __kmp_lock_table_insert.
A simple reproducing case, which will usually segfault:
```
#include <omp.h>
#include <vector>

int main(int argc, char *argv[]) {
#pragma omp parallel for schedule(dynamic, 1)
  for (size_t z = 0; z < 100; z++) {
    size_t Q = 10000 * z;
    std::vector<omp_lock_t> locks(Q);
    for (size_t i = 0; i < Q; i++) {
      omp_init_lock(&locks[i]);
    }
    for (size_t i = 0; i < 100 * Q; i++) {
      auto lock = &locks[i / 100];
      omp_set_lock(lock);
      omp_unset_lock(lock);
    }
    for (size_t i = 0; i < Q; i++) {
      omp_destroy_lock(&locks[i]);
    }
  }
}
```
If run under tsan, you get messages consistent with the above diagnosis:
==================
WARNING: ThreadSanitizer: data race (pid=97042)
  Read of size 8 at 0x7b3800000040 by thread T40:
    #0 memcpy <null> (MakeCrash+0x45045d)
    #1 __kmp_allocate_indirect_lock <null> (MakeCrash+0x5791f8)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Previous write of size 8 at 0x7b3800000040 by thread T47:
    #0 malloc <null> (MakeCrash+0x442645)
    #1 ___kmp_allocate <null> (MakeCrash+0x589694)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Location is heap block of size 224 at 0x7b3800000000 allocated by thread T47:
    #0 malloc <null> (MakeCrash+0x442645)
    #1 ___kmp_allocate <null> (MakeCrash+0x589694)
    #2 __kmp_invoke_microtask <null> (MakeCrash+0x584932)

  Thread T40 (tid=97083, running) created by main thread at:
    #0 pthread_create <null> (MakeCrash+0x444781)
    #1 __kmp_create_worker <null> (MakeCrash+0x57d644)
    #2 __libc_start_main <null> (libc.so.6+0x21b34)

  Thread T47 (tid=97090, running) created by main thread at:
    #0 pthread_create <null> (MakeCrash+0x444781)
    #1 __kmp_create_worker <null> (MakeCrash+0x57d644)
    #2 __libc_start_main <null> (libc.so.6+0x21b34)

SUMMARY: ThreadSanitizer: data race (/mnt/home/adam/bin/MakeCrash+0x45045d) in memcpy
==================</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are on the CC list for the bug.</li>
</ul>
</body>
</html>