[cfe-dev] [RFC] volatile mem* builtins

James Y Knight via cfe-dev cfe-dev at lists.llvm.org
Fri May 8 12:33:18 PDT 2020


IMO, it's quite fine to call this "volatile", and it's actually the
desirable semantics -- even for talking to devices.

E.g. you may want to memcpy data out of a device's buffer, and then do a
volatile write to tell the device you're done with the buffer. Here, you
want the memcpy to be volatile so it cannot be reordered across the "done"
write. But, other than ensuring the correct ordering w.r.t. the subsequent
volatile write, it's quite irrelevant exactly how the data is read from the
buffer -- doing it in the most efficient way is desirable.
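
To make that concrete, here is a sketch of the pattern I mean (the device
layout and names are invented, and it assumes the proposed builtin accepts
volatile-qualified pointers and lowers to a volatile llvm.memcpy):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical memory-mapped device: a receive buffer plus a "done"
       doorbell register that the device watches. */
    extern volatile unsigned char dev_rx_buffer[4096];
    extern volatile uint32_t dev_rx_done;

    void consume_packet(unsigned char *dst, size_t n) {
        /* The copy as a whole is volatile, so it cannot be elided or moved
           past the doorbell write below -- but the implementation remains
           free to pick whatever access widths are fastest. */
        __builtin_memcpy(dst, dev_rx_buffer, n);

        /* Volatile scalar write: tell the device the buffer may be reused. */
        dev_rx_done = 1;
    }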

All I want is for this specification of what exactly is (and, perhaps more
importantly, isn't) guaranteed to actually be written down, both in the
Clang docs for the builtins and in the LLVM LangRef for the IR intrinsics.
Other than that, this seems like a fine and useful addition.


On Fri, May 8, 2020 at 1:07 PM JF Bastien <jfbastien at apple.com> wrote:

> I indeed think that this is fine for the purpose I’ve stated.
>
> Fundamentally, if you’re interacting with device memory you want to know
> that accesses are performed in a certain order, at a certain size
> granularity. In today’s C and C++ implementations, volatile on scalars
> effectively provides this knowledge, even if the Standard doesn’t say so.
> Volatile on anything “large” (more than one or two registers) doesn’t, and
> ought not to be used for device memory with specific semantics.
>
> However, the volatile usage for ToCToU doesn’t care if what you describe
> happens! An adversary can race all they want; as long as any validity check
> occurs after the copy has completed, we’re fine.
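>
> To illustrate that use (a sketch only; the struct, names, and check are
> invented, and it assumes the proposed volatile-pointer overload):
>
>     #include <stdbool.h>
>     #include <stdint.h>
>
>     struct request { uint32_t len; uint8_t payload[256]; };
>
>     /* `shared` is writable by an untrusted party (another process, a
>        guest, a device). Copy it out once, then check and use only the
>        local copy, so nothing can change between the check and the use. */
>     bool handle_request(const volatile struct request *shared) {
>         struct request local;
>
>         /* Volatile copy: the compiler must not fold this away and
>            re-read fields from `shared` after the check below. */
>         __builtin_memcpy(&local, shared, sizeof local);
>
>         if (local.len > sizeof local.payload)
>             return false;          /* validity check on the private copy */
>
>         /* ... use only `local` from here on ... */
>         return true;
>     }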
>
> Maybe this argues for calling the builtin something other than volatile,
> but still mapping it to volatile memcpy IR for now?
>
>
> On May 7, 2020, at 3:07 PM, James Y Knight via cfe-dev <
> cfe-dev at lists.llvm.org> wrote:
>
> Volatile memcpy/memset/etc are indeed rather underspecified. As LangRef
> states, "If the isvolatile parameter is true, the llvm.memcpy call is a
> volatile operation. The detailed access behavior is not very cleanly
> specified and it is unwise to depend on it."
>
> I think the intent at the moment is that the mem-operation as a whole
> should be treated as volatile -- so the memory copy, as a whole, will
> happen only once -- but that there's no guarantee as to what loads/stores
> are used to implement that, including potentially reading/writing some
> bytes multiple times, with different access sizes. I think those semantics
> make sense, but really ought to be fully nailed down.
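>
> As an illustration of what that does *not* guarantee (the names, device
> address, and the proposed volatile-pointer overload are assumptions here):
> a volatile copy out of a register block may be split or merged into
> different access widths, so it is not a substitute for per-register
> volatile loads when each access has side effects.
>
>     #include <stdint.h>
>
>     /* Hypothetical 4-register device block at an invented address. */
>     #define DEV_REGS ((volatile uint32_t *)0x40001000u)
>
>     void snapshot_unspecified(uint32_t dst[4]) {
>         /* The copy happens once as a whole, but it might be lowered as
>            one 16-byte access, four 32-bit accesses, sixteen byte
>            accesses, or something overlapping -- do not rely on widths. */
>         __builtin_memcpy(dst, DEV_REGS, 4 * sizeof(uint32_t));
>     }
>
>     void snapshot_per_register(uint32_t dst[4]) {
>         /* If each register must be read exactly once at 32-bit width,
>            explicit volatile scalar loads remain the only safe spelling. */
>         for (int i = 0; i < 4; i++)
>             dst[i] = DEV_REGS[i];
>     }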
>
>
>
>
> On Thu, May 7, 2020 at 5:13 PM Joerg Sonnenberger via cfe-dev <
> cfe-dev at lists.llvm.org> wrote:
>
>> On Wed, May 06, 2020 at 03:40:43PM -0700, JF Bastien via cfe-dev wrote:
>> > I’d like to add volatile overloads to mem* builtins, and authored a
>> > patch: https://reviews.llvm.org/D79279
>>
>> The major issue I have here is that it seems to seriously underspecify
>> what is actually happening with the memory. Consider an
>> implementation of memset for example. It is entirely legal and
>> potentially even useful to check for length being >= two registers and
>> in that case implement it as
>>     write to [start, start+reglen)
>>     write to [start+len-reglen, start+len)
>>     write reglen-sized blocks starting at (start+reglen) & ~(reglen-1)
>>     until reaching the end
>>
>> or variations to try to hide the unaligned head and tail case in the
>> main loop. This would violate the normal assumptions around volatile,
>> e.g. when using device memory.
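>>
>> Spelled out as code (a sketch only, assuming an 8-byte register width;
>> this is illustrative, not any real libc), with bytes near both ends
>> possibly stored more than once:
>>
>>     #include <stddef.h>
>>     #include <stdint.h>
>>     #include <string.h>
>>
>>     static void memset_sketch(void *start, unsigned char c, size_t len) {
>>         const size_t reglen = sizeof(uint64_t);
>>         uint64_t pat = 0x0101010101010101ull * c;
>>         unsigned char *p = start;
>>
>>         if (len < 2 * reglen) {          /* small case: plain byte loop */
>>             for (size_t i = 0; i < len; i++)
>>                 p[i] = c;
>>             return;
>>         }
>>
>>         /* Possibly-unaligned full-register stores for head and tail. */
>>         memcpy(p, &pat, reglen);                /* [start, start+reglen) */
>>         memcpy(p + len - reglen, &pat, reglen); /* [start+len-reglen, start+len) */
>>
>>         /* Aligned middle: round (start+reglen) down to a reglen boundary
>>            and store reglen blocks until the tail region is reached. These
>>            stores overlap bytes already written above. */
>>         unsigned char *mid = (unsigned char *)
>>             (((uintptr_t)p + reglen) & ~(uintptr_t)(reglen - 1));
>>         for (; mid < p + (len - reglen); mid += reglen)
>>             memcpy(mid, &pat, reglen);
>>     }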
>>
>> Joerg