<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 13, 2016 at 7:00 PM, Hans Boehm via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">I agree with Tim's assessment for ARM. That's interesting; I wasn't previously aware of that instruction.<div><br></div><div>My understanding is that Alpha would have the same problem for normal loads.<div><br></div><div>I'm all in favor of more systematic handling of the fences associated with x86 non-temporal accesses.</div><div><br></div><div>AFAICT, nontemporal loads and stores seem to have different fencing rules on x86, none of them very clear. Nontemporal stores should probably ideally use an SFENCE. Locked instructions seem to be documented to work with MOVNTDQA. In both cases, there seems to be only empirical evidence as to which side(s) of the nontemporal operations they should go on?</div><div><br></div><div>I finally decided that I was OK with using a LOCKed top-of-stack update as a fence in Java on x86. I'm significantly less enthusiastic for C++. I also think that risks unexpected coherence miss problems, though they would probably be very rare. But they would be very surprising if they did occur.</div></div></div></blockquote><div><br></div><div>Today's LLVM already emits 'lock or %eax, (%esp)' for 'fence seq_cst'/__sync_synchronize/__atomic_thread_fence(__ATOMIC_SEQ_CST) when targeting 32-bit x86 machines which do not support mfence. What instruction sequence should we be using instead?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><br></div><div><br></div></div></div><div class=""><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 13, 2016 at 10:59 AM, Tim Northover <span dir="ltr"><<a href="mailto:t.p.northover@gmail.com" target="_blank">t.p.northover@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>> I haven't touched ARMv8 in a few years so I'm rusty on the non-temporal<br>
> On Wed, Jan 13, 2016 at 10:59 AM, Tim Northover <t.p.northover@gmail.com> wrote:
>>> I haven't touched ARMv8 in a few years so I'm rusty on the
>>> non-temporal details for that ISA. I lifted this example from here:
>>>
>>> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0024a/CJACGJJF.html
>>>
>>> Which is correct?
>>
>> FWIW, I agree with John here. The example I'd give for the unexpected
>> behaviour allowed in the spec is:
>>
>> .Lwait_for_data:
>>         ldr     x0, [x3]
>>         cbz     x0, .Lwait_for_data
>>         ldnp    x2, x1, [x0]
>>
>> where another thread first writes to a buffer then tells us where that
>> buffer is. For a normal ldp, the address dependency rule means we
>> don't need a barrier or acquiring load to ensure we see the real data
>> in the buffer. For ldnp, we would need a barrier to prevent stale
>> data.
>>
>> I suspect this is actually even closer to the x86 situation than what
>> the guide implies (which looks like a straight-up exposed pipeline to
>> me, beyond even what Alpha would have done).
>>
>> Cheers.
>>
>> Tim.
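For reference, the fixed sequence Tim describes would presumably look
something like this (just a sketch; a dmb ishld is one option, an
acquiring ldar of the flag/pointer load would be another):

  .Lwait_for_data:
          ldr     x0, [x3]
          cbz     x0, .Lwait_for_data
          dmb     ishld               // order the pointer load before the ldnp
          ldnp    x2, x1, [x0]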