<div dir="ltr">Thx for the explanation Eli and Tim.<div></div><div>My understanding of volatile was that you may have a different value every time you read and as such overlapping reads may be a bug.<br></div><div><br></div><div>Now, since the behaviour of volatile memcpy is not guaranteed and since <a href="https://godbolt.org/z/CnCOLc">clang does not allow to use it anyways</a> I would like to challenge its existence.</div><div>Is there a know reason for keeping the volatile argument in @llvm.memcpy?</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 5, 2019 at 11:28 PM Tim Northover <<a href="mailto:t.p.northover@gmail.com">t.p.northover@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Wed, 5 Jun 2019 at 13:49, Eli Friedman via llvm-dev<br>
<<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
> I don’t see any particular reason to guarantee that a volatile memcpy will access each byte exactly once. How is that useful?<br>
<br>
I agree it's probably not that useful, but I think the non-duplicating<br>
property of volatile is ingrained strongly enough that viewing a<br>
memcpy as a single load and store to each unit (in an unspecified<br>
order) should be legitimate; so I think this actually is a bug.<br>
<br>
As the documentation says though, it's unwise to depend on the<br>
behaviour of a volatile memcpy.<br>
<br>
Cheers.<br>
<br>
Tim.<br>
</blockquote></div>
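<div dir="ltr"><div>P.S. Here is the minimal C++ sketch I mentioned above. It is not the exact source behind the godbolt link, and the names (Payload, copy_mmio) are just placeholders, but it illustrates the point: memcpy has no volatile-qualified parameters, so clang rejects a direct call on volatile pointers, while the intrinsic itself still carries an isvolatile flag.</div>
<pre>
// The argument in question is the trailing i1 flag on the intrinsic,
// roughly as documented in the LangRef:
//   declare void @llvm.memcpy.p0i8.p0i8.i64(i8* dest, i8* src,
//                                           i64 len, i1 isvolatile)
#include <cstring>

struct Payload { int words[16]; };   // placeholder type for illustration

void copy_mmio(volatile Payload *dst, volatile Payload *src) {
    // clang rejects this call: a 'volatile Payload *' cannot be passed to
    // memcpy's non-volatile 'void *' / 'const void *' parameters without
    // dropping the volatile qualifier, so this does not compile.
    std::memcpy(dst, src, sizeof(Payload));
}
</pre></div>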