[PATCH] D31381: [XRay][compiler-rt] Use sanitizer_common's atomic ops

Dean Michael Berris via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Mon Mar 27 00:17:27 PDT 2017


dberris added inline comments.


================
Comment at: lib/xray/xray_buffer_queue.h:44
   std::deque<std::tuple<Buffer, bool>> Buffers;
-  std::mutex Mutex;
+  __sanitizer::SpinMutex Mutex;
   std::unordered_set<void *> OwnedBuffers;
----------------
pelikan wrote:
> This lock is being held over all Buffers (which may not be that bad actually).  It is also held during the only use of apply(), which calls retryingWriteAll() and therefore will take a lot of time, including passively waiting on system calls.  It would therefore make sense to use the blocking mutex here, to avoid burning CPU for potentially milliseconds.
> 
> Is there any downside to a blocking lock in this scenario?  The other ones look fine as the locks only protect very small regions of memory.
For the usage patterns we have, this should not be much of a problem. Let me explain why:

- We expect getting and returning buffers to happen much more frequently than calls to apply(...), and those get/release operations should be fairly quick.
- The caller of apply(...) is usually a thread that has already ensured that every thread writing into the buffers has returned them. While this is an implementation detail of a higher layer (the FDR logging implementation), we can rely on it here (see the sketch below).
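
To make the locking granularity concrete, here is a minimal sketch -- not the actual xray_buffer_queue.h; the field name `Data` and the bool returns are simplifications invented for illustration -- of how the frequent get/release paths hold the lock only briefly, while apply(...) holds it across the whole traversal:

#include <cstddef>
#include <deque>
#include <tuple>
#include <unordered_set>

#include "sanitizer_common/sanitizer_mutex.h"

struct Buffer {
  void *Data = nullptr;
  size_t Size = 0;
};

class BufferQueueSketch {
  std::deque<std::tuple<Buffer, bool>> Buffers;
  __sanitizer::SpinMutex Mutex;
  std::unordered_set<void *> OwnedBuffers;

public:
  // Frequent path, short critical section: hand out one free buffer.
  bool getBuffer(Buffer &Buf) {
    __sanitizer::SpinMutexLock Guard(&Mutex);
    if (Buffers.empty())
      return false;
    Buf = std::get<0>(Buffers.front());
    Buffers.pop_front();
    OwnedBuffers.insert(Buf.Data);
    return true;
  }

  // Frequent path, short critical section: take a buffer back.
  bool releaseBuffer(Buffer &Buf) {
    __sanitizer::SpinMutexLock Guard(&Mutex);
    if (OwnedBuffers.erase(Buf.Data) == 0)
      return false;
    Buffers.emplace_back(Buf, true);
    return true;
  }

  // Rare path, long critical section: the lock is held across the whole
  // walk; in the real code the callback ends up in retryingWriteAll(),
  // which can block in system calls.
  template <class F> void apply(F Fn) {
    __sanitizer::SpinMutexLock Guard(&Mutex);
    for (auto &T : Buffers)
      Fn(std::get<0>(T));
  }
};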

In the review thread for the original change we decided not to hand-write our own spinlock, and a spinlock is still the better fit for the get and release cases. I'm fine using an explicitly blocking mutex here, but that trades the fairly low chance (given how we use this data structure) of some threads spinning for a while against the possibility of threads being blocked while FDR mode is running.

It's easy enough to switch anyway, so I've changed it to use a blocking mutex.
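
For reference, the switch is mechanical, assuming sanitizer_common's BlockingMutex and its BlockingMutexLock scoped guard from sanitizer_mutex.h (this is a sketch of the shape of the change, not the exact patch):

#include "sanitizer_common/sanitizer_mutex.h"

// The member in BufferQueue becomes a blocking mutex instead of a spin mutex:
//   __sanitizer::BlockingMutex Mutex;

// Each critical section then uses the matching scoped guard, which puts the
// thread to sleep on contention rather than burning CPU:
void criticalSectionSketch(__sanitizer::BlockingMutex &Mutex) {
  __sanitizer::BlockingMutexLock Guard(&Mutex);
  // ... touch Buffers / OwnedBuffers while holding the lock ...
}

The guard releases the lock in its destructor at scope exit, so the bodies of getBuffer/releaseBuffer/apply themselves don't need to change.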


https://reviews.llvm.org/D31381




