[PATCH] D26232: [XRay][compiler-rt] XRay Buffer Queue

Serge Rogatch via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 16 11:28:22 PST 2016


rSerge added inline comments.


================
Comment at: lib/xray/xray_buffer_queue.cc:24
+  for (auto &Buf : Buffers) {
+    void *Tmp = malloc(BufferSize);
+    Buf.Buffer = Tmp;
----------------
dberris wrote:
> rSerge wrote:
> > Why not do just one `malloc` and then distribute the pointers at offsets?
> The idea is to have differentiated buffers that can be treated as individual chunks of memory. It also allows us to check for when malloc fails.
Ok, I see.


================
Comment at: lib/xray/xray_buffer_queue.cc:32
+std::error_code BufferQueue::getBuffer(Buffer &Buf) {
+  if (Finalizing.load(std::memory_order_acquire))
+    return std::make_error_code(std::errc::state_not_recoverable);
----------------
dberris wrote:
> rSerge wrote:
> > Shouldn't this be checked inside the mutex lock?
> Nope, `Finalizing` is atomic, and is already synchronised -- so we avoid locking the mutex when the BufferQueue is already finalizing.
What if `finalize()` is called between `Finalizing.load(std::memory_order_acquire)` and `std::lock_guard<std::mutex> Guard(Mutex);` here? The description for this data structure claims that `getBuffer` requests must be denied after `finalize()` is called, but this doesn't happen in this scenario. To enforce that invariant, the finalization flag must be set and read with mutex locked, so the flag itself doesn't have to be atomic (because the mutex provides the necessary acquire/release semantics).
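A sketch of the fix being suggested (the class shape, member names, and error codes are assumptions based on the quoted snippets, not the actual patch): both `getBuffer` and `finalize` touch the flag only while holding the same mutex, which closes the window between the flag load and the lock, and lets the flag be a plain `bool`.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <mutex>
#include <system_error>

struct Buffer { void *Data = nullptr; };  // illustrative placeholder

class BufferQueue {
  std::mutex Mutex;
  bool Finalizing = false;  // plain bool: the mutex provides the ordering
  std::deque<Buffer> Buffers;

public:
  explicit BufferQueue(size_t N) : Buffers(N) {}

  std::error_code getBuffer(Buffer &Buf) {
    std::lock_guard<std::mutex> Guard(Mutex);
    if (Finalizing)  // read under the lock: no race with finalize()
      return std::make_error_code(std::errc::state_not_recoverable);
    if (Buffers.empty())
      return std::make_error_code(std::errc::not_enough_memory);
    Buf = Buffers.front();
    Buffers.pop_front();
    return {};
  }

  std::error_code finalize() {
    std::lock_guard<std::mutex> Guard(Mutex);  // set under the same lock
    Finalizing = true;
    return {};
  }
};
```

With this shape, any `getBuffer` call that starts after `finalize()` returns is guaranteed to be denied, which is the invariant the data structure's description promises.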


================
Comment at: lib/xray/xray_buffer_queue.cc:46
+  std::lock_guard<std::mutex> Guard(Mutex);
+  Buffers.push_back(Buf);
+  Buf.Buffer = nullptr;
----------------
dberris wrote:
> rSerge wrote:
> > You are making it a queue, so the least recently used memory region is popped. That is bad for the CPU cache. Why not make it a stack, so that the most recently used buffer is returned first?
> > It would need just a change to `push_front` here... and perhaps renaming the data structure.
> The point of the queue is so that we can ensure a temporal bound, i.e. we can keep around buffers that have been filled before. We explicitly want to actually keep the data around in the buffers, so that operations that have happened in the past are kept.
> 
> This is a key concept that the flight data recorder mode described in the whitepaper actually requires. Otherwise, if we made this a stack, then only the most recent operations will ever be kept (as opposed to a running log of things that have happened in the past).
So is this data structure just dropping the least recently used data? I didn't get that initially.


https://reviews.llvm.org/D26232




