<div dir="ltr">One more proposal: simple per-thread counters allocated with mmap(MAP_NORESERVE), the same trick that works so well for asan/tsan/msan.<div><br></div><div>Chrome has ~3M basic blocks instrumented for coverage, </div>
<div>so even the largest applications will hardly have more than, say, 10M basic blocks</div><div>(the number can be made configurable at application start time). This gives us 80MB for the array of 64-bit counters. </div><div>That's a lot if multiplied by the number of threads, but the MAP_NORESERVE trick solves the problem -- </div>
<div>each thread will only touch the pages where it actually increments the counters. </div><div>On thread exit the whole 80MB counter array will be merged into a central array of counters and then discarded, </div><div>
but we can also postpone this until another new thread is created -- then we just reuse the counter array. </div><div><br></div><div>This brings two challenges. <br></div><div><br></div><div>#1. The basic blocks should be numbered sequentially. I see only one way to accomplish this: with the help of the linker (and the dynamic linker for DSOs).</div>
<div>The compiler would emit code using offsets that will later be transformed into constants by the linker. </div><div>Not sure if any existing linker supports this kind of thing. Anyone? </div><div><br></div><div>#2. How to access the per-thread counter array. If we simply store the pointer to the array in TLS, the instrumentation will be more expensive just because of the need to load and keep this pointer. </div>
<div>If the counter array is part of TLS itself, we'll have to intrude into the pthread library (or wrap it) so that this part of TLS is mapped with MAP_NORESERVE. </div><div><br></div><div>This is all far from trivial, but it</div>
<div> - has the same performance in the single-threaded case as the current design; </div><div> - has no contention in the multi-threaded case; </div><div> - should use a moderate amount of RAM due to the MAP_NORESERVE trick.</div>
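Roughly, the allocation and merge parts of this proposal could look like the sketch below (names like alloc_thread_counters and kMaxCounters are illustrative, not part of the proposal; assumes POSIX mmap):

```cpp
#include <sys/mman.h>
#include <cstdint>
#include <cstddef>

// Upper bound on instrumented basic blocks (configurable at startup in
// the proposal): 10M * 8 bytes = 80MB of address space per thread.
const size_t kMaxCounters = 10 * 1000 * 1000;

// Reserve 80MB of address space. With MAP_NORESERVE, physical pages are
// committed only for the counters this thread actually touches.
uint64_t *alloc_thread_counters() {
  void *p = mmap(nullptr, kMaxCounters * sizeof(uint64_t),
                 PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  return p == MAP_FAILED ? nullptr : static_cast<uint64_t *>(p);
}

// On thread exit: merge into the central array, then discard the
// mapping (or keep it around for reuse by the next thread).
void merge_and_discard(uint64_t *central, uint64_t *local) {
  for (size_t i = 0; i < kMaxCounters; i++)
    if (local[i])  // pages never touched read back as zero
      __sync_fetch_and_add(&central[i], local[i]);
  munmap(local, kMaxCounters * sizeof(uint64_t));
}
```

The merge loop scans all 80MB of virtual space, but untouched pages are backed by the shared zero page, so the scan is cheap in practice.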
<div><br></div><div>?? </div><div><br></div><div>--kcc </div><div><br></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Apr 18, 2014 at 11:50 AM, Kostya Serebryany <span dir="ltr"><<a href="mailto:kcc@google.com" target="_blank">kcc@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote"><div><div class="h5">On Fri, Apr 18, 2014 at 11:44 AM, Dmitry Vyukov <span dir="ltr"><<a href="mailto:dvyukov@google.com" target="_blank">dvyukov@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Fri, Apr 18, 2014 at 11:32 AM, Dmitry Vyukov <span dir="ltr"><<a href="mailto:dvyukov@google.com" target="_blank">dvyukov@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div class="gmail_extra"><div class="gmail_quote">On Fri, Apr 18, 2014 at 11:13 AM, Dmitry Vyukov <span dir="ltr"><<a href="mailto:dvyukov@google.com" target="_blank">dvyukov@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi,<br><br>This is a long thread, so I will combine several comments into a single email.<div>
</gr-replace>
<br><br>>> - 8-bit per-thread counters, dumping into central counters on overflow.<br></div><div>>The overflow will happen very quickly with 8bit counter.<br>
<br></div>Yes, but it reduces contention by 256x (a thread must execute at least 256 loop iterations between increments). In practice, if you reduce contention below some threshold, it does not represent a problem anymore.<div>
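A sketch of what the 8-bit scheme could look like (illustrative names and sizes; the real increment would be compiler-emitted):

```cpp
#include <atomic>
#include <cstdint>

const int kNumCounters = 1000;  // illustrative size

// Shared 64-bit counters, touched only on overflow of the per-thread
// 8-bit counter, i.e. at most once per 256 local increments.
std::atomic<uint64_t> central[kNumCounters];
__thread uint8_t local[kNumCounters];  // per-thread 8-bit counters

inline void count(int idx) {
  if (++local[idx] == 0)  // wrapped around: 256 increments happened
    central[idx].fetch_add(256, std::memory_order_relaxed);
}
// Note: residual local counts (<256) still need a flush at thread exit.
```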
<br><br>
<br>>> - per-thread counters. Solves the problem at huge cost in RAM per-thread<br></div><div>>It is not practical. Especially for TLS counters -- it creates huge pressure on stack memory.<br><br></div>
Do we have any numbers on the number of counters? If there are 100K 1-byte counters, I would consider it practical.<div><br>
<br> <br>> - self-cooling logarithmic counters: if ((fast_random() % (1 << counter)) == 0) counter++;<br><br></div>This option did not receive any attention, even though it has O(1) memory consumption and reduces contention.<br>
Note that there are several tunables here: first -- how to implement fast_random: it can be a per-thread LCG, a per-thread counter or set of counters, or something else; second -- the frequency of increment, e.g. it can actually be additive and/or bounded (because if we reduce the frequency of increments by 1000x, that must be enough).<div>
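One possible instantiation, with a per-thread xorshift PRNG as fast_random (all names are illustrative; an LCG would also work, but its low bits are too regular for the modulo here):

```cpp
#include <cstdint>

// Cheap per-thread xorshift32 PRNG -- no synchronization needed.
__thread uint32_t rng_state = 1;  // xorshift requires a nonzero seed

inline uint32_t fast_random() {
  uint32_t x = rng_state;
  x ^= x << 13;
  x ^= x >> 17;
  x ^= x << 5;
  return rng_state = x;
}

// The counter holds roughly log2 of the execution count: an increment
// from value c happens with probability 2^-c, so both the write rate
// and the contention on hot counters drop exponentially.
inline void log_count(uint8_t *counter) {
  if (*counter < 32 && (fast_random() % (1u << *counter)) == 0)
    (*counter)++;
}
```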
<br>
<br><br>> In Google GCC, we implemented another technique which proves to be very effective -- it is called FDO sampling.<br>> Basically counters will be updated every N samples.<br><br></div>How does it work?<div>
<br><br><br>
>> It seems to me like we’re going to have a hard time getting good multithreaded performance without significant impact on the single-threaded behavior.<br></div><div>> I don't really agree.<br><br>
</div>Disagreement seconded.<br>
For the single-threaded case we are talking about constant overhead, while for the multithreaded case -- about superlinear overhead.<br>Having said that, if there is a *simple* way to modify the final scheme to significantly reduce single-threaded overheads, then of course it's worth doing. But it's not the cornerstone.<div>
<br>
<br><br>> MAX is a fixed cap so even on systems with 100s of cores we don't do something silly.<br><br></div>Makes no sense to me.<br>Why do you want only systems with up to 100 cores to be fast and scalable? 100+ core systems *especially* need good scalability (contention tends to be superlinear).<br>
What you are proposing is basically: If I have 10 engineers in my company, I probably want to give them 10 working desks as well. But let's not go insane. If I have 1000 engineers, 100 desks must be enough for them. This must reduce costs.<br>
The baseline memory consumption for systems (and amount of RAM!) is O(NCORES), not O(1). In some read-mostly cases it's possible to achieve O(1) memory consumption, and that's great. But if it's not the case here, let it be so.<div>
<br>
<br><br>> shard_count = std::min(MAX, std::max(NUMBER_OF_THREADS, NUMBER_OF_CORES))<br><br></div>Threads do not produce contention; it's cores that produce contention.<br>The formula must be: shard_count = k*NCORES<br>
And if you want less memory in the single-threaded case, then: shard_count = min(k*NCORES, c*NTHREADS)<div><br>
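Made concrete, the sizing rule above could read as follows (k and c are tunables; the values and the sysconf-based core count are assumptions for illustration):

```cpp
#include <unistd.h>
#include <algorithm>

// shard_count = min(k*NCORES, c*NTHREADS): scale with cores (the real
// source of contention), but don't pay for many shards when there are
// only a few threads.
int compute_shard_count(int nthreads, int k = 4, int c = 1) {
  long ncores = sysconf(_SC_NPROCESSORS_ONLN);
  if (ncores < 1) ncores = 1;  // sysconf may fail; assume one core
  return std::max(1, std::min(k * static_cast<int>(ncores), c * nthreads));
}
```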
<br><br>>We are talking about developers here. Nobody would know the exact thread counts, but developers know the ballpark number<br><br></div>I strongly believe that we must relieve developers of this choice at build time, and do our best to auto-tune (if the final scheme requires tuning).<br>
First, such questions put an unnecessary burden on developers. We don't ask what register allocation algorithm to use for each function, right?<br>Second, there is a significant chance we will get a wrong answer, because e.g. the developer's view of how threaded the app is can differ from reality or from our classification.<br>
Third, the app can be built by a build engineer; or the procedure can be applied to a codebase with 10'000 apps; or the threaded-ness can change; or the app can work in different modes; or the app can have different phases.<div>
<br>
<br><br>>Another danger involved with dynamically resizing the counter is that it requires a global or per function lock to access the counters. The cost of this can be really high.<span></span><span></span><br>
<div><br></div></div><div>It's possible to do it without a mutex; the fast-path overhead is just an atomic consume load, which (if implemented properly in the compiler) is virtually free:</div><div><br></div><div><div>const int maxshard = 4096;</div>
<div>uint64* shard[maxshard];</div><div>atomic<int> shardmask;</div><div><br></div><div>void inline inccounter(int idx)</div><div>{</div><div><span style="white-space:pre-wrap"> </span>int shardidx = gettid() & atomic_load(&shardmask, memory_order_consume);</div>
<div><span style="white-space:pre-wrap"> </span>shard[shardidx][idx]++;</div><div>}</div><div><br></div><div>int pthread_create(...)</div><div>{</div><div><span style="white-space:pre-wrap"> </span>if (updateshardcount()) {</div>
<div><span style="white-space:pre-wrap"> </span>shardlock();</div><div><span style="white-space:pre-wrap"> </span>if (updateshardcount()) {</div><div><span style="white-space:pre-wrap"> </span>int newcount = computeshardcount();</div>
<div><span style="white-space:pre-wrap">	</span>int oldcount = atomic_load(&shardmask, memory_order_relaxed) + 1;</div><div><span style="white-space:pre-wrap">	</span>for (int i = oldcount; i < newcount; i++)</div><div><span style="white-space:pre-wrap">	</span>shard[i] = (uint64*)calloc(ncounter, sizeof(uint64)); // counters must start zeroed</div><div><span style="white-space:pre-wrap">	</span>atomic_store(&shardmask, newcount-1, memory_order_release);</div>
<div><span style="white-space:pre-wrap"> </span>}</div><div><span style="white-space:pre-wrap"> </span>shardunlock();</div><div><span style="white-space:pre-wrap"> </span>}</div><div><span style="white-space:pre-wrap"> </span>...<span style="white-space:pre-wrap"> </span></div>
<div>}</div></div><div><br></div><div><br></div></div>
</blockquote></div><br></div></div></div><div class="gmail_extra">If we go with this scheme, for tid I would do:</div><div class="gmail_extra"><br></div><div class="gmail_extra">__thread int threadid;</div><div class="gmail_extra">
<br></div>
<div class="gmail_extra">int pthread_create(...)</div><div class="gmail_extra">{</div><div class="gmail_extra"> threadid = goohhash(gettid());</div><div class="gmail_extra"> ...<br>}</div><div class="gmail_extra"><br>
</div><div class="gmail_extra">void userfunc()<br>{</div><div class="gmail_extra"> int tid = threadid+1;</div><div class="gmail_extra"> threadid = tid;</div></div></blockquote><div><br></div></div></div><div>Kostya noted that this increment is a bad idea. Because each thread will access all possible cache lines for each counter. I agree, this is a bad idea.</div>
<div><br></div><div>What bothers me is that if we do tid%something, there is a non-zero chance that a significant number of active threads will be mapped onto the same shard. And then we are back where we started -- we have significant contention.</div>
<div><br></div><div>Ideally it would be something like:</div><div><br></div><div>on_thread_rescheduling()</div><div>{</div><div> tid = getcpuid();</div><div>}</div><div><br></div><div>but common OSes do not provide necessary interfaces.</div>
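Short of a true rescheduling callback, Linux does offer sched_getcpu() (glibc 2.6+), which approximates getcpuid() above; the returned value can be stale by the time it's used, but a stale value only means occasional shard sharing, never incorrect counts. A sketch (shard_for_current_cpu is an illustrative name):

```cpp
#include <sched.h>

// Pick a shard by the CPU the thread is (probably) running on.
// sched_getcpu() may be stale after a migration, but that only causes
// occasional shard sharing, not lost or corrupted counts.
inline int shard_for_current_cpu(int shardmask) {
  int cpu = sched_getcpu();
  if (cpu < 0) cpu = 0;  // unsupported kernel: fall back to shard 0
  return cpu & shardmask;
}
```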
</div></div></div></blockquote><div><br></div></div></div><div>Note also: if we store tid in a thread-local variable, we'll either have to load it on every counter increment or keep it in a register,</div><div>which by itself will bring an additional ~10% slowdown on x86_64. This is why taking bits from %sp or %fs sounds attractive (though it has problems too).</div>
<div class="">
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>
<div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"> // use tid in profiling</div>
<div class="gmail_extra"> ...</div><div class="gmail_extra">}<br><br></div><div class="gmail_extra">This avoids the problem of a potentially expensive gettid(), and avoids possible persistent interference between tids.</div>
<div class="gmail_extra"><br></div></div>
</blockquote></div></div><br></div></div>
</blockquote></div></div><br></div></div>
</blockquote></div><br></div>