<div dir="ltr"><div class="gmail_extra">Having thought a bit about the best strategy to solve this, I think we should use a tradeoff of memory to reduce contention. I don't really like any of the other options as much, if we can get that one to work. Here is my specific suggestion:</div>
<div class="gmail_extra"><br></div><div class="gmail_extra"><div class="gmail_quote">On Thu, Apr 17, 2014 at 5:21 AM, Kostya Serebryany <span dir="ltr"><<a href="mailto:kcc@google.com" target="_blank">kcc@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>- per-cpu counters (not portable, requires very modern kernel with lots of patches)<br></div>
<div>- sharded counters: each counter represented as N counters sitting in different cache lines. Every thread accesses the counter with index TID%N. Solves the problem partially, better with larger values of N, but then again it costs RAM.</div>
</blockquote><div><br></div><div>I think we should combine these somewhat.</div><div><br></div><div>At an abstract level, I think we should design the profiling to support up to N shards of counters.</div><div><br></div><div>
I think we should have a dynamic number of shards if possible. The goal here is that if you never need multiple shards (single threaded) you pay essentially zero cost. I would have a global number of shards that changes rarely, and re-compute it on entry to each function with something along the lines of:</div>
<div><br></div><div>if (thread-ID != main's thread-ID && shard_count == 1) {</div><div> shard_count = std::min(MAX, std::max(NUMBER_OF_THREADS, NUMBER_OF_CORES));</div><div> // if shard_count changed with this, we can also call a library routine here that does the work of allocating the actual extra shards.</div>
<div>}</div><div><br></div><div>MAX is a fixed cap so that even on systems with hundreds of cores we don't do something silly. NUMBER_OF_THREADS, if the OS can report it, limits the shards when the program only has a small number of threads. NUMBER_OF_CORES, if the OS can report it, likewise limits the shards. If we can't get the number of threads, we just use the number of cores; if we can't get the number of cores either, we can just guess 8 (or something).</div>
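<div>To make that concrete, here is a rough C++ sketch of the function-entry check. All the names (kMaxShards, shard_count, maybe_grow_shards, allocate_shards) are illustrative, not a real API; std::thread::hardware_concurrency stands in for NUMBER_OF_CORES (it returns 0 when the OS can't say, which is where the guess of 8 comes in), and I'm ignoring NUMBER_OF_THREADS since standard C++ has no portable way to count threads:</div>

```cpp
#include <algorithm>
#include <atomic>
#include <thread>

// Illustrative names, not a real API: kMaxShards plays the role of MAX above,
// and allocate_shards() stands in for the library routine that would allocate
// the actual extra shards.
constexpr unsigned kMaxShards = 64;
std::atomic<unsigned> shard_count{1};

unsigned guess_core_count() {
  // hardware_concurrency() returns 0 when the OS can't tell us; guess 8 then.
  unsigned n = std::thread::hardware_concurrency();
  return n ? n : 8;
}

void maybe_grow_shards(bool is_main_thread) {
  // The function-entry check: a cheap relaxed load on the fast path, and the
  // 1 -> N transition happens at most once.
  if (!is_main_thread && shard_count.load(std::memory_order_relaxed) == 1) {
    unsigned n = std::min(kMaxShards, guess_core_count());
    unsigned expected = 1;
    if (shard_count.compare_exchange_strong(expected, n)) {
      // allocate_shards(n);  // would allocate the actual extra shards here
    }
  }
}
```

<div>The compare-exchange makes the 1 -&gt; N transition happen exactly once even if several threads hit the check simultaneously, so the allocation routine can't run twice.</div>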
<div><br></div><div>Then, we can gracefully fall back on the following strategies to pick an index into the shards:</div><div><br></div><div>- Per-core non-atomic counter updates (if we support them) using restartable sequences</div>
<div>- Use a core-ID as the index into the shards to statistically minimize the contention, and do the increments atomically so it remains correct even if the core-ID load happens before a core migration and the counter increment occurs afterward</div>
<div>- Use (thread-ID % number of cores) if the target OS gives us no way to get a core-ID. I suspect this will still give a reasonable distribution, even if not perfect.</div><div><br></div><div><br></div>
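<div>A sketch of the last two fallbacks (leaving out restartable sequences, which need the kernel support mentioned above). Again purely illustrative: PaddedCounter, shard_index, and increment aren't real runtime names; sched_getcpu is the Linux way to get a core-ID, and the fallback just hands out sequential per-thread IDs rather than a true thread-ID:</div>

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>
#if defined(__linux__)
#include <sched.h>  // sched_getcpu()
#endif

// One counter slot per shard, padded out to a (presumed) 64-byte cache line
// so that shards don't false-share.
struct PaddedCounter {
  std::atomic<uint64_t> value{0};
  char pad[64 - sizeof(std::atomic<uint64_t>)];
};

static std::size_t shard_index(std::size_t num_shards) {
#if defined(__linux__)
  // Use the core-ID when we can get one. It may be stale by the time we
  // increment, which is fine because the increment itself is atomic.
  int cpu = sched_getcpu();
  if (cpu >= 0)
    return static_cast<std::size_t>(cpu) % num_shards;
#endif
  // No core-ID available: fall back to a per-thread ID. Sequential IDs
  // modulo the shard count should still spread threads out reasonably.
  static std::atomic<std::size_t> next_tid{0};
  thread_local std::size_t tid = next_tid.fetch_add(1);
  return tid % num_shards;
}

void increment(std::vector<PaddedCounter> &shards) {
  shards[shard_index(shards.size())].value.fetch_add(1, std::memory_order_relaxed);
}
```

<div>Because the increment is an atomic RMW, the total across shards stays exact no matter which shard each update lands in; the sharding affects contention, never correctness.</div>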
<div>Finally, I wouldn't merge on shutdown if possible. I would just write larger raw profile data for multithreaded runs, and let the profdata tool merge them.</div><div><br></div><div><br></div><div>If this is still too much memory, then I would suggest doing the above, but doing it independently for each function so that only those functions actually called via multithreaded code end up sharding their counters.</div>
<div><br></div><div><br></div><div>I think this would be reasonably straightforward to implement, not significantly grow the cost of single-threaded instrumentation, and largely mitigate the contention on the counters. It can benefit from advanced hooks into the OS when those are available, but seems pretty easy to implement on roughly any OS with reasonable tradeoffs. The only really hard requirement is the ability to get a thread-id, but I think that is fairly reasonable (C++ even makes this essentially mandatory).</div>
<div><br></div><div>Thoughts?</div></div></div></div>