[compiler-rt] [compiler-rt][builtins] Switch libatomic locks to pthread_mutex_t. (PR #94374)
via llvm-commits
llvm-commits at lists.llvm.org
Tue Jun 4 09:32:23 PDT 2024
https://github.com/Logikable updated https://github.com/llvm/llvm-project/pull/94374
From 54f0e4100edda9043084748c9ec67656bc5dc26a Mon Sep 17 00:00:00 2001
From: Sean Luchen <seanluchen at google.com>
Date: Tue, 4 Jun 2024 16:10:45 +0000
Subject: [PATCH] [compiler-rt][builtins] Switch libatomic locks to
pthread_mutex_t.
When an uninstrumented libatomic is used with a TSan-instrumented
memcpy, TSan may report a data race even in circumstances where the
writes are safe.
This occurs because __atomic_compare_exchange is not instrumented in
an uninstrumented libatomic, so TSan cannot see the synchronization
that makes the subsequent memcpy race-free.
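
For context, the locked fallback in atomic.c has roughly the following
shape (a simplified sketch; the real code first tries a lock-free fast
path and handles several operand sizes):

// Simplified sketch of the generic fallback in
// compiler-rt/lib/builtins/atomic.c. When libatomic is built without
// TSan instrumentation, the CAS inside lock() is invisible to TSan,
// so the memcpy below looks like an unsynchronized access.
void __atomic_load_c(int size, void *src, void *dest, int model) {
  Lock *l = lock_for_pointer(src); // hash the address to one of the locks
  lock(l);                         // previously a raw __c11_atomic CAS loop
  memcpy(dest, src, size);         // instrumented memcpy: all TSan sees
  unlock(l);
}
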
On the other hand, pthread_mutex_(un)lock is intercepted by TSan,
meaning TSan will not report this false positive even with an
uninstrumented libatomic.
A pthread_mutex may also try a number of different strategies to
acquire the lock, which can bound the amount of time a thread waits
for the lock under contention.
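
A minimal reproducer sketch, assuming a type too large to be lock-free
(so accesses dispatch to libatomic) and both threads built with
-fsanitize=thread against an uninstrumented libatomic:

#include <pthread.h>
#include <stdatomic.h>

struct big { char bytes[32]; }; // not lock-free on typical targets
static _Atomic struct big shared;

static void *writer(void *arg) {
  (void)arg;
  struct big v = {{0}};
  atomic_store(&shared, v); // dispatches to libatomic's locked path
  return NULL;
}

int main(void) {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, writer, NULL);
  pthread_create(&t2, NULL, writer, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  return 0;
}

With the old spinlock, TSan sees only the two memcpys inside libatomic
and may flag them as a race; with pthread_mutex_t, its
pthread_mutex_(un)lock interceptors establish the happens-before edge
and the report disappears.
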
While pthread_mutex_lock has higher overhead (due to the function
call and some dispatching), any operation that dispatches to libatomic
has already forgone strict performance guarantees.
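
As a reminder of when that dispatch happens: the compiler only calls
into libatomic when an atomic access cannot be lowered to a lock-free
instruction sequence, for example (on typical targets):

#include <stdatomic.h>

struct big { char bytes[32]; }; // too wide for an inline atomic load

_Atomic struct big g;

struct big read_big(void) {
  // Not lock-free, so this compiles to a call into libatomic
  // (__atomic_load) rather than an inline atomic instruction.
  return atomic_load(&g);
}
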
---
compiler-rt/lib/builtins/atomic.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/compiler-rt/lib/builtins/atomic.c b/compiler-rt/lib/builtins/atomic.c
index 852bb20f08672..159c364e2fb89 100644
--- a/compiler-rt/lib/builtins/atomic.c
+++ b/compiler-rt/lib/builtins/atomic.c
@@ -94,19 +94,12 @@ static Lock locks[SPINLOCK_COUNT]; // initialized to OS_SPINLOCK_INIT which is 0
#else
_Static_assert(__atomic_always_lock_free(sizeof(uintptr_t), 0),
"Implementation assumes lock-free pointer-size cmpxchg");
-typedef _Atomic(uintptr_t) Lock;
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
/// Unlock a lock. This is a release operation.
-__inline static void unlock(Lock *l) {
- __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
-}
-/// Locks a lock. In the current implementation, this is potentially
-/// unbounded in the contended case.
-__inline static void lock(Lock *l) {
- uintptr_t old = 0;
- while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
- __ATOMIC_RELAXED))
- old = 0;
-}
+__inline static void unlock(Lock *l) { pthread_mutex_unlock(l); }
+/// Locks a lock.
+__inline static void lock(Lock *l) { pthread_mutex_lock(l); }
/// locks for atomic operations
static Lock locks[SPINLOCK_COUNT];
#endif