[compiler-rt] [compiler-rt][builtins] Switch libatomic locks to pthread_mutex_t. (PR #94374)

via llvm-commits llvm-commits at lists.llvm.org
Tue Jun 4 09:23:23 PDT 2024


https://github.com/Logikable created https://github.com/llvm/llvm-project/pull/94374

When an uninstrumented libatomic is used with a TSan-instrumented memcpy, TSan may report a data race in circumstances where the writes are arguably safe.
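
For concreteness, here is a minimal sketch of the kind of program that can trigger such a report, compiled with -fsanitize=thread and linked against an uninstrumented libatomic. The struct size and names are illustrative; any atomic object too large to be lock-free takes libatomic's locked path.

  #include <pthread.h>

  /* A 32-byte struct has no lock-free atomic support on typical
     targets, so accesses to it dispatch to libatomic's locked
     fallback, which copies the payload with memcpy. */
  typedef struct { char payload[32]; } Big;
  static _Atomic Big shared;

  static void *writer(void *arg) {
    (void)arg;
    Big b = {{0}};
    shared = b; /* lowers to a __atomic_store call into libatomic */
    return 0;
  }

  int main(void) {
    pthread_t t;
    pthread_create(&t, 0, writer, 0);
    Big b = shared; /* lowers to a __atomic_load call into libatomic */
    pthread_join(t, 0);
    (void)b;
    return 0;
  }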

This occurs because the __atomic_compare_exchange used for locking inside an uninstrumented libatomic is not instrumented, so TSan doesn't know that the subsequent memcpy is race-free.
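
To make the mechanism concrete, here is a condensed sketch of the pre-patch locked path, adapted from compiler-rt/lib/builtins/atomic.c (locked_store is an illustrative stand-in for libatomic's real entry points). When this file is built without -fsanitize=thread, the cmpxchg is a bare instruction carrying no TSan event, so TSan never learns that the memcpy is lock-protected:

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  typedef _Atomic(uintptr_t) Lock;

  static void lock(Lock *l) {
    uintptr_t old = 0;
    while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
                                               __ATOMIC_RELAXED))
      old = 0;
  }

  static void unlock(Lock *l) { __c11_atomic_store(l, 0, __ATOMIC_RELEASE); }

  /* Stand-in for libatomic's locked store path: the memcpy below is
     the TSan-intercepted call that gets reported as a racy write. */
  void locked_store(Lock *l, void *dest, const void *src, size_t n) {
    lock(l);
    memcpy(dest, src, n);
    unlock(l);
  }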

On the other hand, pthread_mutex_(un)lock will be intercepted by TSan, meaning TSan will not report this false positive even with an uninstrumented libatomic.
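
The mutex-based equivalent of the sketch above shows why: even when libatomic itself is uninstrumented, its calls into pthread functions resolve to TSan's interceptors, so TSan records the lock's happens-before edge and accepts the memcpy as protected (locked_store again being an illustrative stand-in):

  #include <pthread.h>
  #include <stddef.h>
  #include <string.h>

  typedef pthread_mutex_t Lock;

  void locked_store(Lock *l, void *dest, const void *src, size_t n) {
    pthread_mutex_lock(l);   /* intercepted by TSan */
    memcpy(dest, src, n);    /* now seen as protected by *l */
    pthread_mutex_unlock(l); /* intercepted by TSan */
  }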

While pthread_mutex_lock has a larger overhead (due to the function call), a dispatch to libatomic already implies the absence of performance guarantees.

From 1f8739f26bd4789d562e0116bc1e499985201fca Mon Sep 17 00:00:00 2001
From: Sean Luchen <seanluchen at google.com>
Date: Tue, 4 Jun 2024 16:10:45 +0000
Subject: [PATCH] [compiler-rt][builtins] Switch libatomic locks to
 pthread_mutex_t.

When an uninstrumented libatomic is used with a TSan-instrumented
memcpy, TSan may report a data race in circumstances where the writes
are arguably safe.

This occurs because the __atomic_compare_exchange used for locking
inside an uninstrumented libatomic is not instrumented, so TSan doesn't
know that the subsequent memcpy is race-free.

On the other hand, pthread_mutex_(un)lock will be intercepted by TSan,
meaning TSan will not report this false positive even with an
uninstrumented libatomic.

While pthread_mutex_lock has a larger overhead (due to the function
call), a dispatch to libatomic already implies the absence of
performance guarantees.
---
 compiler-rt/lib/builtins/atomic.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/compiler-rt/lib/builtins/atomic.c b/compiler-rt/lib/builtins/atomic.c
index 852bb20f08672..aaba648e4a175 100644
--- a/compiler-rt/lib/builtins/atomic.c
+++ b/compiler-rt/lib/builtins/atomic.c
@@ -94,18 +94,16 @@ static Lock locks[SPINLOCK_COUNT]; // initialized to OS_SPINLOCK_INIT which is 0
 #else
 _Static_assert(__atomic_always_lock_free(sizeof(uintptr_t), 0),
                "Implementation assumes lock-free pointer-size cmpxchg");
-typedef _Atomic(uintptr_t) Lock;
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
 /// Unlock a lock.  This is a release operation.
 __inline static void unlock(Lock *l) {
-  __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
+  pthread_mutex_unlock(l);
 }
 /// Locks a lock.  In the current implementation, this is potentially
 /// unbounded in the contended case.
 __inline static void lock(Lock *l) {
-  uintptr_t old = 0;
-  while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
-                                             __ATOMIC_RELAXED))
-    old = 0;
+  pthread_mutex_lock(l);
 }
 /// locks for atomic operations
 static Lock locks[SPINLOCK_COUNT];


