[llvm-dev] @llvm.memcpy not honoring volatile?

James Y Knight via llvm-dev llvm-dev at lists.llvm.org
Thu Jun 13 08:08:49 PDT 2019


On Thu, Jun 13, 2019 at 12:54 AM JF Bastien <jfbastien at apple.com> wrote:

>
>
> On Jun 12, 2019, at 9:38 PM, James Y Knight <jyknight at google.com> wrote:
>
> 
> On Tue, Jun 11, 2019 at 12:08 PM JF Bastien via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> I think we want option 2: keep volatile memcpy, and implement it as
>> touching each byte exactly once. That’s unlikely to be particularly useful
>> for direct-to-hardware uses, but it behaves intuitively enough that I
>> think it’s desirable.
>>
>
> As Eli pointed out, that precludes lowering a volatile memcpy into a call
> to the memcpy library function. The usual "memcpy" library function may
> well use the same overlapping-memory trick, and there is no
> "volatile_memcpy" libc function which would guarantee not touching bytes
> multiple times. Perhaps it's okay to just always emit an inline loop
> instead of falling back to a memcpy call.
>
>
> In which circumstances does this matter?
>

If it's problematic to touch a byte multiple times when emitting inlined
instructions for a "volatile memcpy", surely it's also problematic to emit
a library function call which does the same thing?

But -- I don't know of any realistic circumstance where either one would be
important to actual users. Someone would need to have a situation where
doing 2 overlapping 4-byte writes to implement a 7-byte memcpy is
problematic, but where it doesn't matter to them what permutation of
non-overlapping memory read/write sizes is used -- and furthermore, where
the order doesn't matter. That seems extremely unlikely to ever be the case.
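For concreteness, here is a sketch in plain C (not an actual libc or compiler lowering; the function name is invented) of the overlapping trick at issue: a 7-byte copy done as two 4-byte accesses, so byte 3 is read and written twice.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of how a memcpy implementation might handle a
 * 7-byte copy: two overlapping 4-byte accesses covering bytes 0..3 and
 * 3..6, so byte 3 is touched twice -- exactly what a "touch each byte
 * once" volatile memcpy would forbid. Assumes dst and src don't overlap. */
static void copy7_overlapping(void *dst, const void *src) {
    uint32_t lo, hi;
    memcpy(&lo, src, 4);                   /* read bytes 0..3 */
    memcpy(&hi, (const char *)src + 3, 4); /* read bytes 3..6 (byte 3 again) */
    memcpy(dst, &lo, 4);                   /* write bytes 0..3 */
    memcpy((char *)dst + 3, &hi, 4);       /* write bytes 3..6 (byte 3 again) */
}
```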

> Paul McKenney has a follow-on paper (linked from R2 of mine) which
> addresses some of your questions I think. LLVM can do what it wants for now
> since there’s no standard, but there’s likely to be one eventually and we
> probably should match what it’s likely to be.
>

I agree, Paul's paper describes the actually-required (vs
C-standard-required) semantics for volatile loads and stores today -- that
they must use non-tearing operations for sizes/alignments where the
hardware provides such. (IMO, any usage of volatile where that cannot be
done is extremely questionable). Of course, that doesn't say anything about
memcpy, since volatile memcpy isn't part of C, just part of LLVM.
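As a concrete illustration of the non-tearing requirement (the names here are mine, not from the thread): a naturally aligned volatile 32-bit access is expected to compile to a single 32-bit load or store, never two 16-bit halves.

```c
#include <stdint.h>

/* Sketch: volatile accesses at a size/alignment the hardware supports
 * directly must not tear. On a typical 32-bit target, each of these is
 * expected to lower to exactly one load or one store instruction. */
uint32_t mmio_read32(volatile uint32_t *reg) {
    return *reg;   /* single 32-bit load */
}

void mmio_write32(volatile uint32_t *reg, uint32_t value) {
    *reg = value;  /* single 32-bit store */
}
```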

> Always a byte-by-byte copy?

It can.


So, why does LLVM even provide a volatile memcpy intrinsic? One possible
answer is that it was needed to implement volatile aggregate copies
generated by the C frontend. So, given the real-world requirement to use
single instructions where possible... what about this code:

struct X {int n;};
void foo(volatile struct X *n) {
    n[0] = n[1];
}

Clang implements it by creating a volatile llvm.memcpy call, which is
currently lowered as a 32-bit read and a 32-bit write. Maybe it should be
_required_ to always emit single 32-bit read/write instructions, just as if
you were directly operating on a 'volatile int *n' (assuming a 32-bit
platform which has such instructions, of course)?
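In C terms, the "required single 32-bit access" lowering suggested above would behave like the following hand-written version (the function name is invented for illustration):

```c
struct X { int n; };

/* Sketch: copy n[1] to n[0] with exactly one volatile 32-bit load and one
 * volatile 32-bit store, as if operating on 'volatile int *' directly.
 * Assumes int is 32 bits and struct X has no padding. */
void foo_single_access(volatile struct X *n) {
    *(volatile int *)&n[0] = *(volatile int *)&n[1];
}
```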

Or -- maybe memcpy is actually not a reasonable thing to use for copying a
volatile struct at all. Perhaps a volatile struct copy should do volatile
element-wise copies of each fundamentally-typed field? That might make some
sense. (But...unions?).
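Here is a sketch of that element-wise alternative, for a struct with two fundamentally-typed fields (the struct and function are hypothetical examples, not from the thread):

```c
struct Pair { int a; short b; };

/* Sketch: instead of one volatile memcpy of sizeof(struct Pair) bytes,
 * copy each field with its own volatile access of the field's natural
 * size. Note that padding bytes are not copied at all -- one visible
 * difference from a memcpy-based lowering. */
void copy_pair(volatile struct Pair *dst, const volatile struct Pair *src) {
    dst->a = src->a;  /* volatile 32-bit load + store */
    dst->b = src->b;  /* volatile 16-bit load + store */
}
```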