<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Sep 21, 2015 at 3:35 PM, Sean Silva <span dir="ltr"><<a href="mailto:chisophugis@gmail.com" target="_blank">chisophugis@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Mon, Sep 21, 2015 at 8:20 AM, Rui Ueyama <span dir="ltr"><<a href="mailto:ruiu@google.com" target="_blank">ruiu@google.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><span><div class="gmail_quote">On Sun, Sep 20, 2015 at 6:42 PM, Rui Ueyama <span dir="ltr"><<a href="mailto:ruiu@google.com" target="_blank">ruiu@google.com</a>></span> wrote:</div></span><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><p dir="ltr">We can, but I think we don't have to, because I didn't see any measurable difference between this code and the old racy one.</p></blockquote></span><div>... in terms of speed.</div></div></div></div></blockquote><div><br></div></span><div>On x86 at a hardware level a similar amount of synchronization is required for both cases, so this isn't surprising (regardless of whether the access is atomic or not, the cache coherence mechanisms kick in). Simply replacing a non-atomic access with an atomic access is not enough. From inspection of the code, there seems to be quite a bit of work being done between increments of this value, so it's probably not going to matter, but if you someday change this to use fine-grained parallelism so that it scales linearly with # cores (it is highly parallelizable, so this should be possible), it will probably be necessary to factor this in.</div><div><br></div><div>For reference, I have attached a simple test program that shows the relative impact of the different choices.</div><div><br></div><div><p style="margin:0px;font-size:12px;font-family:Menlo"><span style="color:rgb(52,187,199)">Sean</span>:~/tmp <span style="color:rgb(83,48,225)">%</span> clang++ -std=c++11 -DNUM_THREADS=12 -DNUM_OPS_PER_THREAD=5000000 -O3 testsynchronization.cpp && ./a.out</p>
<p style="margin:0px;font-size:12px;font-family:Menlo">Per thread, non-atomic: 1.00x (baseline)</p>
<p style="margin:0px;font-size:12px;font-family:Menlo">Per-thread, atomic: 3.61x slower than baseline</p>
<p style="margin:0px;font-size:12px;font-family:Menlo">Shared, non-atomic: 29.47x slower than baseline</p>
<p style="margin:0px;font-size:12px;font-family:Menlo">Shared, atomic: 112.72x slower than baseline</p></div><div><br></div><div>This is on a Xeon E5-2697 v2 (12 physical cores).</div><div><br></div><div>As you can see, the effect of making the access non-atomic is very small compared to the effect of avoiding cache coherence overhead (3-4x vs. 30x).</div></div></div></div></blockquote><div><br></div><div>Thank you for the interesting numbers. Shared atomic variable are indeed much slower than non-shared, non-atomic variables when you have a lot of concurrent writes. If I'm really trying to get until the last drop from multi-cores, I'd do as you suggested so that threads share almost nothing. I didn't do that simply because the cost of updating the atomic counter is in practice negligible in this code because we are doing so many things other than updating the atomic variable. I don't generally optimize unless doing that makes measurable performance difference.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div>CC'ing Dmitry who may have more insight to give.</div><span><font color="#888888"><div><br></div><div>-- Sean Silva</div></font></span><div><div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="gmail_quote">2015/09/20 18:32 "Sean Silva" <<a href="mailto:chisophugis@gmail.com" target="_blank">chisophugis@gmail.com</a>>:<div><div><br type="attribution"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">It seems unfortunate for us to need any synchronization at all just to generate a unique ID. Can we just have a bunch of counters, one per thread, then initialize them to `ThreadIndex << 50` or something? (make sure they are each on their own cache line)<div><br></div><div>-- Sean Silva</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Sep 19, 2015 at 6:44 PM, Rui Ueyama via llvm-commits <span dir="ltr"><<a href="mailto:llvm-commits@lists.llvm.org" target="_blank">llvm-commits@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Author: ruiu<br>
>>>> On Sat, Sep 19, 2015 at 6:44 PM, Rui Ueyama via llvm-commits
>>>> <llvm-commits@lists.llvm.org> wrote:
>>>>> Author: ruiu
>>>>> Date: Sat Sep 19 20:44:44 2015
>>>>> New Revision: 248106
>>>>>
>>>>> URL: http://llvm.org/viewvc/llvm-project?rev=248106&view=rev
>>>>> Log:
>>>>> COFF: Fix race condition.
>>>>>
>>>>> NextID is updated inside parallel_for_each, so it needs mutual exclusion.
>>>>>
>>>>> Modified:
>>>>>     lld/trunk/COFF/ICF.cpp
>>>>>
>>>>> Modified: lld/trunk/COFF/ICF.cpp
>>>>> URL: http://llvm.org/viewvc/llvm-project/lld/trunk/COFF/ICF.cpp?rev=248106&r1=248105&r2=248106&view=diff
>>>>> ==============================================================================
>>>>> --- lld/trunk/COFF/ICF.cpp (original)
>>>>> +++ lld/trunk/COFF/ICF.cpp Sat Sep 19 20:44:44 2015
>>>>> @@ -73,7 +73,7 @@ private:
>>>>>    bool forEachGroup(std::vector<SectionChunk *> &Chunks, Comparator Eq);
>>>>>    bool partition(ChunkIterator Begin, ChunkIterator End, Comparator Eq);
>>>>>
>>>>> -  uint64_t NextID = 1;
>>>>> +  std::atomic<uint64_t> NextID = { 1 };
>>>>>  };
>>>>>
>>>>>  // Entry point to ICF.
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> llvm-commits mailing list
>>>>> llvm-commits@lists.llvm.org
>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits