[compiler-rt] [compiler-rt][builtins] Add opt-in pthread_mutex_t locks to libatomic (PR #95326)

via llvm-commits llvm-commits at lists.llvm.org
Wed Jun 12 15:51:23 PDT 2024


https://github.com/Logikable created https://github.com/llvm/llvm-project/pull/95326

When an uninstrumented libatomic is used together with a TSan-instrumented memcpy, TSan may report a data race in circumstances where the writes are arguably safe.

This occurs because __atomic_compare_exchange won't be instrumented in an uninstrumented libatomic, so TSan doesn't know that the subsequent memcpy is race-free.

On the other hand, pthread_mutex_(un)lock calls are intercepted by TSan, so with pthread-based locks TSan will not report this false positive even though libatomic itself is uninstrumented.
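
For illustration, here is a minimal reproducer sketch (hypothetical, not taken from this PR; whether a report actually fires depends on the toolchain and on how libatomic was built). A 32-byte _Atomic object is larger than any lock-free atomic size on common targets, so the compiler lowers the accesses to __atomic_store/__atomic_load calls that land in libatomic's lock-protected path:

/* Hypothetical reproducer sketch. Build with something like
 *   clang -fsanitize=thread repro.c
 * linked against an uninstrumented compiler-rt atomic library
 * (exact flags are illustrative). */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { char bytes[32]; } Blob; /* too large to be lock-free */
static _Atomic Blob shared;

static void *writer(void *arg) {
  Blob b = {{0}};
  b.bytes[0] = (char)(intptr_t)arg;
  shared = b; /* atomic store of a 32-byte object -> libatomic's locked path */
  return NULL;
}

int main(void) {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, writer, (void *)1);
  pthread_create(&t2, NULL, writer, (void *)2);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  Blob b = shared; /* atomic load, also routed through libatomic */
  printf("bytes[0] = %d\n", b.bytes[0]);
  return 0;
}

With an uninstrumented libatomic, TSan sees the intercepted memcpy inside the locked path but not the spinlock guarding it, so it may report a race between the two writer threads; with the pthread-based locks added here, the pthread_mutex_(un)lock interceptors make that synchronization visible to TSan.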

pthread mutexes may also try a number of different strategies to acquire the lock, which can bound the time a thread waits for the lock under contention.

While pthread_mutex_lock has a larger overhead (due to the function call and some dispatching), an operation that already has to dispatch to libatomic comes with no strong performance guarantees in the first place.
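
For reference, the new behavior is opt-in at compiler-rt configure time. Roughly (the option name and the definition it sets come from the patch below; the rest of the command line is illustrative):

  cmake <usual compiler-rt options> -DCOMPILER_RT_LIBATOMIC_USE_PTHREAD=ON

This defines _LIBATOMIC_USE_PTHREAD for the builtins build, which selects the pthread_mutex_t Lock implementation in atomic.c instead of the platform spinlock.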

From fea2dd20f175f9f9c91977ab08b3536aff0532cf Mon Sep 17 00:00:00 2001
From: Sean Luchen <seanluchen at google.com>
Date: Wed, 12 Jun 2024 22:48:57 +0000
Subject: [PATCH] [compiler-rt][builtins] Add opt-in pthread_mutex_t locks to
 libatomic

When an uninstrumented libatomic is used with a TSan instrumented
memcpy, TSan may report a data race in circumstances where writes are
arguably safe.

This occurs because __atomic_compare_exchange won't be instrumented in
an uninstrumented libatomic, so TSan doesn't know that the subsequent
memcpy is race-free.

On the other hand, pthread_mutex_(un)lock will be intercepted by TSan,
meaning an uninstrumented libatomic will not report this false-positive.

pthread_mutexes also may try a number of different strategies to acquire
the lock, which may bound the amount of time a thread has to wait for a
lock during contention.

While pthread_mutex_lock has a larger overhead (due to the function
call and some dispatching), a dispatch to libatomic already predicates
a lack of performance guarantees.
---
 compiler-rt/lib/builtins/CMakeLists.txt |  8 ++++++++
 compiler-rt/lib/builtins/atomic.c       | 14 ++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/compiler-rt/lib/builtins/CMakeLists.txt b/compiler-rt/lib/builtins/CMakeLists.txt
index 6778ae1c35263..ba81c78f16608 100644
--- a/compiler-rt/lib/builtins/CMakeLists.txt
+++ b/compiler-rt/lib/builtins/CMakeLists.txt
@@ -237,6 +237,14 @@ if(NOT FUCHSIA AND NOT COMPILER_RT_BAREMETAL_BUILD)
   )
 endif()
 
+option(COMPILER_RT_LIBATOMIC_USE_PTHREAD
+  "Whether libatomic should use pthreads if available."
+  Off)
+
+if(COMPILER_RT_LIBATOMIC_USE_PTHREAD)
+  add_compile_definitions(_LIBATOMIC_USE_PTHREAD)
+endif()
+
 if(COMPILER_RT_HAS_ATOMIC_KEYWORD AND NOT COMPILER_RT_EXCLUDE_ATOMIC_BUILTIN)
   set(GENERIC_SOURCES
     ${GENERIC_SOURCES}
diff --git a/compiler-rt/lib/builtins/atomic.c b/compiler-rt/lib/builtins/atomic.c
index 852bb20f08672..aded25d9baa98 100644
--- a/compiler-rt/lib/builtins/atomic.c
+++ b/compiler-rt/lib/builtins/atomic.c
@@ -12,7 +12,7 @@
 //
 //  1) This code must work with C programs that do not link to anything
 //     (including pthreads) and so it should not depend on any pthread
-//     functions.
+//     functions. If the user wishes to opt into using pthreads, they may do so.
 //  2) Atomic operations, rather than explicit mutexes, are most commonly used
 //     on code where contended operations are rate.
 //
@@ -56,7 +56,17 @@ static const long SPINLOCK_MASK = SPINLOCK_COUNT - 1;
 // defined.  Each platform should define the Lock type, and corresponding
 // lock() and unlock() functions.
 ////////////////////////////////////////////////////////////////////////////////
-#if defined(__FreeBSD__) || defined(__DragonFly__)
+#if defined(_LIBATOMIC_USE_PTHREAD)
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
+/// Unlock a lock.  This is a release operation.
+__inline static void unlock(Lock *l) { pthread_mutex_unlock(l); }
+/// Locks a lock.
+__inline static void lock(Lock *l) { pthread_mutex_lock(l); }
+/// locks for atomic operations
+static Lock locks[SPINLOCK_COUNT];
+
+#elif defined(__FreeBSD__) || defined(__DragonFly__)
 #include <errno.h>
 // clang-format off
 #include <sys/types.h>


