[PATCH] D18500: [tsan] Add support for OS X OSAtomic* functions

Dmitry Vyukov via llvm-commits llvm-commits at lists.llvm.org
Mon Apr 4 01:01:11 PDT 2016


dvyukov added inline comments.

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:121
@@ +120,3 @@
+  char *byte_ptr = ((char *)ptr) + (n >> 3);
+  __tsan_atomic8_fetch_or((a8 *)byte_ptr, 0, mo_acq_rel);
+  return result;
----------------
This is not correct. If you separate the operation from the memory barrier, the release barrier needs to precede the operation itself (while the acquire barrier needs to follow it). And if you split this into acquire + operation + release, it will increase the cost of the operation significantly, because we will lock the sync mutex twice and process the vector clock twice.
I would suggest using __tsan_atomic8_fetch_or/and to implement these bit-flag operations. For the NoBarrier versions you can pass mo_relaxed, which should be as cheap as calling the REAL function.
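
Roughly something like this (untested sketch; the interceptor boilerplate is approximate, and the 0x80 >> (n & 7) bit selection is my reading of the OSAtomic semantics on top of the byte addressing the patch already computes):

  TSAN_INTERCEPTOR(bool, OSAtomicTestAndSet, uint32_t n, volatile void *ptr) {
    SCOPED_TSAN_INTERCEPTOR(OSAtomicTestAndSet, n, ptr);
    volatile char *byte_ptr = ((volatile char *)ptr) + (n >> 3);
    char bit = (char)(0x80u >> (n & 7));
    // A single tsan atomic does both the bit operation and the synchronization.
    // Use mo_acq_rel here and mo_relaxed for the NoBarrier variant.
    char old = __tsan_atomic8_fetch_or((volatile a8 *)byte_ptr, bit, mo_acq_rel);
    return old & bit;
  }

OSAtomicTestAndClear would be the same shape with __tsan_atomic8_fetch_and and an inverted mask.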

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:135
@@ +134,3 @@
+  char *byte_ptr = ((char *)ptr) + (n >> 3);
+  __tsan_atomic8_fetch_or((a8 *)byte_ptr, 0, mo_acq_rel);
+  return result;
----------------
Same here.

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:143
@@ +142,3 @@
+  REAL(OSAtomicEnqueue)(list, item, offset);
+  __tsan_atomic64_fetch_or((volatile a64 *)item, 0, mo_release);
+}
----------------
The release must precede the operation. Also, I would use __tsan_release(item).
If you do the release after the operation, a consumer can dequeue the item and do its acquire before we do the release, which will lead to false positives.
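
I.e. roughly (sketch, assuming the usual interceptor boilerplate):

  TSAN_INTERCEPTOR(void, OSAtomicEnqueue, OSQueueHead *list, void *item,
                   size_t offset) {
    SCOPED_TSAN_INTERCEPTOR(OSAtomicEnqueue, list, item, offset);
    // Release before the enqueue so a concurrent consumer's acquire
    // cannot observe the item before we publish it.
    __tsan_release(item);
    REAL(OSAtomicEnqueue)(list, item, offset);
  }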

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:149
@@ +148,3 @@
+  void *item = REAL(OSAtomicDequeue)(list, offset);
+  if (item) __tsan_atomic64_load((volatile a64 *)item, mo_acquire);
+  return item;
----------------
__tsan_acquire(item)
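
I.e. roughly (sketch):

  TSAN_INTERCEPTOR(void *, OSAtomicDequeue, OSQueueHead *list, size_t offset) {
    SCOPED_TSAN_INTERCEPTOR(OSAtomicDequeue, list, offset);
    void *item = REAL(OSAtomicDequeue)(list, offset);
    // Acquire after the operation, pairing with the enqueuer's release.
    if (item) __tsan_acquire(item);
    return item;
  }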

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:160
@@ +159,3 @@
+  REAL(OSAtomicFifoEnqueue)(list, item, offset);
+  __tsan_atomic64_fetch_or((volatile a64 *)item, 0, mo_release);
+}
----------------
__tsan_release(item) before the operation.

================
Comment at: lib/tsan/rtl/tsan_interceptors_mac.cc:167
@@ +166,3 @@
+  void *item = REAL(OSAtomicFifoDequeue)(list, offset);
+  if (item) __tsan_atomic64_load((volatile a64 *)item, mo_acquire);
+  return item;
----------------
__tsan_acquire(item)


http://reviews.llvm.org/D18500





More information about the llvm-commits mailing list