[compiler-rt] 47625e4 - Fix race in the implementation of __tsan_acquire() (#84923)

via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 12 15:36:52 PDT 2024


Author: Dave Clausen
Date: 2024-03-12T15:36:48-07:00
New Revision: 47625e47db1d8fef6936ef48103e9aeb1fa3d328

URL: https://github.com/llvm/llvm-project/commit/47625e47db1d8fef6936ef48103e9aeb1fa3d328
DIFF: https://github.com/llvm/llvm-project/commit/47625e47db1d8fef6936ef48103e9aeb1fa3d328.diff

LOG: Fix race in the implementation of __tsan_acquire() (#84923)

`__tsan::Acquire()`, which is called by `__tsan_acquire()`, has a
performance optimization which attempts to avoid acquiring the atomic
variable's mutex if the variable has no associated memory model state.

However, if the atomic variable was recently written to by a
`compare_exchange_weak/strong` on another thread, the memory model state
may be created *after* the atomic variable is updated. This is a data
race, and can cause the thread calling `Acquire()` to not realize that
the atomic variable was previously written to by another thread.
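Schematically, the old fast path raced with sync-state creation roughly like this (simplified pseudocode, not the actual runtime code):

```
// Writer thread (compare_exchange path):
//   1. store the new value into the atomic variable
//   2. create the sync state and set s->clock (under s->mtx)
//
// Reader thread (old Acquire()):
//   a. s = lookup sync state for addr
//   b. if (!s->clock) return;      // unlocked check: can run between 1 and 2
//   c. ReadLock lock(&s->mtx);
//   d. thr->clock.Acquire(s->clock);
//
// If b executes between 1 and 2, the reader already sees the new value of
// the atomic but no clock, so the happens-before edge is lost.
```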

Specifically, consider code that writes to an atomic variable using
`compare_exchange_weak/strong`, while another thread reads the value
using a relaxed load followed by an
`atomic_thread_fence(memory_order_acquire)` and a call to
`__tsan_acquire()`. TSAN may not realize that the store happened before
the fence, so it will complain about any other variables accessed from
both threads whose thread-safety depends on the happens-before
relationship between the store and the fence.

This change eliminates the unsafe optimization in `Acquire()`. Now,
`Acquire()` acquires the mutex before checking for the existence of the
memory model state.

Added: 
    compiler-rt/test/tsan/compare_exchange_acquire_fence.cpp

Modified: 
    compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp

Removed: 
    


################################################################################
diff --git a/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp b/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp
index 2e978852ea7d37..2a8aa1915c9aeb 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp
+++ b/compiler-rt/lib/tsan/rtl/tsan_rtl_mutex.cpp
@@ -446,9 +446,9 @@ void Acquire(ThreadState *thr, uptr pc, uptr addr) {
   if (!s)
     return;
   SlotLocker locker(thr);
+  ReadLock lock(&s->mtx);
   if (!s->clock)
     return;
-  ReadLock lock(&s->mtx);
   thr->clock.Acquire(s->clock);
 }
 

diff --git a/compiler-rt/test/tsan/compare_exchange_acquire_fence.cpp b/compiler-rt/test/tsan/compare_exchange_acquire_fence.cpp
new file mode 100644
index 00000000000000..b9fd0c5ad21f24
--- /dev/null
+++ b/compiler-rt/test/tsan/compare_exchange_acquire_fence.cpp
@@ -0,0 +1,43 @@
+// RUN: %clangxx_tsan -O1 %s -o %t && %run %t 2>&1
+// This is a correct program and tsan should not report a race.
+//
+// Verify that there is a happens-before relationship between a
+// memory_order_release store that happens as part of a successful
+// compare_exchange_strong(), and an atomic_thread_fence(memory_order_acquire)
+// that happens after a relaxed load.
+
+#include <atomic>
+#include <sanitizer/tsan_interface.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <thread>
+
+std::atomic<bool> a;
+unsigned int b;
+constexpr int loops = 100000;
+
+void Thread1() {
+  for (int i = 0; i < loops; ++i) {
+    while (a.load(std::memory_order_acquire)) {
+    }
+    b = i;
+    bool expected = false;
+    a.compare_exchange_strong(expected, true, std::memory_order_acq_rel);
+  }
+}
+
+int main() {
+  std::thread t(Thread1);
+  unsigned int sum = 0;
+  for (int i = 0; i < loops; ++i) {
+    while (!a.load(std::memory_order_relaxed)) {
+    }
+    std::atomic_thread_fence(std::memory_order_acquire);
+    __tsan_acquire(&a);
+    sum += b;
+    a.store(false, std::memory_order_release);
+  }
+  t.join();
+  fprintf(stderr, "DONE: %u\n", sum);
+  return 0;
+}

More information about the llvm-commits mailing list