<div dir="ltr">This is very neat. The concern is that for a long-running process, the precision loss can be huge. Run-to-run counter variance can also be very large (without atomic updates).<div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Moreover, this approach lets us implement the counter increment using a callback: </div>
<div><div><div> if ((fast_thread_local_rand() & counter) == 0) __cov_increment(&counter);</div></div>
</div><div>which in turn will let us use the same hack as in AsanCoverage: use the PC to map the counter to the source code. </div><div>(= no need to create separate section in the objects).</div></div></blockquote><div>
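The quoted one-liner can be fleshed out into a sketch like the following. Everything here beyond the quoted guard expression (the xorshift RNG, the `__cov_increment` body, the counter variable) is hypothetical scaffolding for illustration; the key property is that masking a fast thread-local random value with the counter itself makes hot counters fire exponentially more rarely.

```cpp
#include <cstdint>

// Hypothetical fast per-thread RNG (xorshift32): cheap, no atomics needed.
thread_local uint32_t rng_state = 0x9e3779b9;

inline uint32_t fast_thread_local_rand() {
  rng_state ^= rng_state << 13;
  rng_state ^= rng_state >> 17;
  rng_state ^= rng_state << 5;
  return rng_state;
}

uint32_t counter = 0;  // stands in for one coverage counter

// Illustrative slow path; in the scheme described above it could use the
// caller's PC to map the counter back to source, as AsanCoverage does.
void __cov_increment(uint32_t *c) { ++*c; }

inline void maybe_count() {
  // Fires with probability roughly 2^-popcount(counter), so the increment
  // becomes rarer as the counter grows; counter == 0 always fires.
  if ((fast_thread_local_rand() & counter) == 0) __cov_increment(&counter);
}
```

The resulting counts are biased toward a logarithmic scale rather than exact, which is acceptable for coverage ("was this hit, and roughly how hot") but not for precise profile counts.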
<br></div><div>A similar callback mechanism (GCC-based) is used at Google for testing coverage.</div><div><br></div><div>David</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div><br></div><div>Thoughts? </div>
<div><br></div><div>--kcc</div><div><br></div><div><br></div><div><div>% clang++ -O2 -lpthread coverage_mt_vec.cc && time ./a.out </div><div>TIME: real: 0.219; user: 0.430; system: 0.000</div><div>% clang++ -O2 -lpthread -fprofile-instr-generate coverage_mt_vec.cc && time ./a.out </div>
<div>TIME: real: 3.743; user: 7.280; system: 0.000</div><div><br></div><div>% cat coverage_mt_vec.cc</div></div><div><div>#include &lt;pthread.h&gt;</div><div>#include &lt;vector&gt;</div><div><br></div><div>std::vector&lt;int&gt; v(1000);</div>
<div><br></div><div>__attribute__((noinline)) void foo() { v[0] = 42; }</div><div> </div><div>void *Thread1(void *) {</div><div> for (int i = 0; i < 100000000; i++)</div><div> foo(); </div><div> return 0;</div><div>
} </div><div><br></div><div>__attribute__((noinline)) void bar() { v[999] = 66; }</div><div><br></div><div>void *Thread2(void *) {</div><div> for (int i = 0; i < 100000000; i++)</div><div> bar();</div><div> return 0;</div>
<div>} </div><div> </div><div>int main() {</div><div> static const int kNumThreads = 16;</div><div> pthread_t t[kNumThreads];</div><div> pthread_create(&t[0], 0, Thread1, 0);</div><div> pthread_create(&t[1], 0, Thread2, 0);</div>
<div> pthread_join(t[0], 0);</div><div> pthread_join(t[1], 0);</div><div> return 0;</div><div>}</div></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote"><div><div class="h5">
On Fri, Apr 18, 2014 at 11:45 PM, Xinliang David Li <span dir="ltr"><<a href="mailto:xinliangli@gmail.com" target="_blank">xinliangli@gmail.com</a>></span> wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote"><div>On Fri, Apr 18, 2014 at 12:13 AM, Dmitry Vyukov <span dir="ltr"><<a href="mailto:dvyukov@google.com" target="_blank">dvyukov@google.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<br><br>This is long thread, so I will combine several comments into single email.<div><br><br>
>> - 8-bit per-thread counters, dumping into central counters on overflow.<br></div><div>>The overflow will happen very quickly with 8bit counter.<br>
<br></div>Yes, but it reduces contention by 256x (a thread must execute at least 256 loop iterations between increments). In practice, once you reduce contention below some threshold, it stops being a problem.<div>
<br>
<br>
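The 8-bit per-thread scheme discussed above can be sketched as follows. The names (`g_central_counter`, `count_edge`) are illustrative, not a real API; the point is that the shared cache line is touched only once per 256 local increments.

```cpp
#include <atomic>
#include <cstdint>

// Shared 64-bit counter; contended only on local-counter overflow.
std::atomic<uint64_t> g_central_counter{0};

// Cheap per-thread 8-bit counter; no atomics on the hot path.
thread_local uint8_t local_count = 0;

inline void count_edge() {
  if (++local_count == 0)  // wrapped: 256 increments since last flush
    g_central_counter.fetch_add(256, std::memory_order_relaxed);
}
```

Note the precision-loss concern raised at the top of the thread: up to 255 residual increments per thread are lost unless each thread also flushes its local counter at exit.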
<br>>> - per-thread counters. Solves the problem at huge cost in RAM per-thread<br></div><div>>It is not practical. Especially for TLS counters -- it creates huge pressure on stack memory.<br><br></div>
Do we have any numbers on the number of counters? If there are 100K 1-byte counters, I would consider it practical.<div><br></div></div></blockquote><div><br></div></div><div>A medium-sized app I looked at has about 10M counters (arcs only). It is also not uncommon to see such apps running with hundreds of threads.</div>
<div>
<div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div><br>
<br><br>> In Google GCC, we implemented another technique which proves to be very effective -- it is called FDO sampling.<br>> Basically counters will be updated every N samples.<br><br></div>How does it work?</div>
</blockquote><div><br></div></div><div>Similar to how occurrence-based PMU sampling works. Setting the sampling period to 100 can reduce the instrumentation overhead by close to 100x without introducing much precision loss.</div>
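A minimal sketch of the sampling scheme described above, with assumed names (`kSamplePeriod`, `bump_counter`): each thread keeps a countdown, and only one in every `kSamplePeriod` executions pays for the real counter update, scaled up to approximate the true count.

```cpp
#include <cstdint>

constexpr uint32_t kSamplePeriod = 100;   // assumed tuning knob ("N")
thread_local uint32_t sample_countdown = kSamplePeriod;

uint64_t counter = 0;  // stands in for one profile counter

inline void bump_counter() {
  if (--sample_countdown == 0) {
    sample_countdown = kSamplePeriod;
    counter += kSamplePeriod;  // scale up to approximate the true count
  }
}
```

The hot path is a thread-local decrement and branch; the shared counter is touched only once per period, which is where the near-100x overhead reduction comes from.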
<div>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><br><br><br>
>> It seems to me like we’re going to have a hard time getting good multithreaded performance without significant impact on the single-threaded behavior.<br></div><div>> I don't really agree.<br><br>
</div></div><div><br><div>>We are talking about developers here. Nobody would know the exact thread counts, but developers know the ballpark number<br><br></div></div><div>I strongly believe that we must relieve developers of this choice at build time, and do our best to auto-tune (if the final scheme requires tuning).<br>
</div></div></blockquote><div><br></div><div><br></div><div>That really depends. If the tuning space is small, it won't be a problem for the developer/builder. </div><div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
First, such questions put an unnecessary burden on developers. We don't ask which register allocation algorithm to use for each function, right?<br></div></blockquote><div><br></div></div><div>Crazy developers can actually do that via internal options, but this is a totally different case. People just need one flag to turn sharding on/off. When sharding is on, the compiler can pick the best 'N' according to some heuristics at compile time.</div>
<div>
<div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Second, there is a significant chance we will get a wrong answer, because e.g. the developer's view of how threaded the app is can differ from reality or from our classification.<br>
Third, the app can be built by a build engineer; or the procedure can be applied to a code base with 10,000 apps; or the threaded-ness can change; or the app can work in different modes; or the app can have different phases.<div>
<br></div></div></blockquote><div><br></div></div><div>We have also not yet mentioned the benefit of implementation simplicity. If the static/simple solution solves the problem for most of the use cases, designing a fancy dynamic solution sounds like over-engineering to me. It (the overhead-reduction technique) may also get in the way of functional enhancements in the future.</div>
<span><font color="#888888">
<div><br></div><div> David</div></font></span></div><br></div></div>
<br></div></div><div class="">_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:LLVMdev@cs.uiuc.edu" target="_blank">LLVMdev@cs.uiuc.edu</a> <a href="http://llvm.cs.uiuc.edu" target="_blank">http://llvm.cs.uiuc.edu</a><br>
<a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
<br></div></blockquote></div><br></div>
</blockquote></div><br></div></div>