[llvm-commits] [PATCH] Define 'atomic load' and 'atomic store' as variants of load/store

Jeffrey Yasskin jyasskin at google.com
Fri Aug 5 13:05:42 PDT 2011


On Thu, Aug 4, 2011 at 7:01 PM, Eli Friedman <eli.friedman at gmail.com> wrote:
> Patch attached.
>
> This should be mostly straightforward, but I'd like to check to see if
> anyone has comments on how alignment is handled on atomic loads and
> stores.  Other comments also welcome, of course.
>
> (After this, I get the fun task of going through every call to
> isVolatile() in LLVM.)


>
> Index: include/llvm/Instructions.h
> ===================================================================
> --- include/llvm/Instructions.h	(revision 136929)
> +++ include/llvm/Instructions.h	(working copy)
> @@ -171,11 +179,42 @@
>    /// getAlignment - Return the alignment of the access that is being performed
>    ///
>    unsigned getAlignment() const {
> -    return (1 << (getSubclassDataFromInstruction() >> 1)) >> 1;
> +    return (1 << ((getSubclassDataFromInstruction() >> 1) & 31)) >> 1;

I'd recommend extracting these bit-shifting accessors into helper
functions eventually.
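For example, something along these lines (a rough sketch with hypothetical
names, not in the patch; the field layout, bit 0 volatile, bits 1-5 encoded
alignment, bit 6 synch scope, bits 7-9 ordering, is just read off the shifts
in the patch):

  // Hypothetical helpers; not in the patch.  The alignment field stores
  // log2(alignment)+1, so a stored 0 decodes to "no alignment specified".
  unsigned getPackedField(unsigned Shift, unsigned Mask) const {
    return (getSubclassDataFromInstruction() >> Shift) & Mask;
  }
  void setPackedField(unsigned Shift, unsigned Mask, unsigned Value) {
    assert((Value & ~Mask) == 0 && "Value doesn't fit in field");
    setInstructionSubclassData(
        (getSubclassDataFromInstruction() & ~(Mask << Shift)) |
        (Value << Shift));
  }

  // The accessors then become, e.g.:
  unsigned getAlignment() const {
    return (1u << getPackedField(1, 31)) >> 1;  // decode log2(align)+1
  }
  AtomicOrdering getOrdering() const {
    return AtomicOrdering(getPackedField(7, 7));
  }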

>    }
>
>    void setAlignment(unsigned Align);
>
> +  /// Returns the ordering effect of this fence.

This is a load, not a fence, right?

> +  AtomicOrdering getOrdering() const {
> +    return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
> +  }
> +
> +  /// Set the ordering constraint on this fence.  May only be Acquire, Release,
> +  /// AcquireRelease, or SequentiallyConsistent.

May only be NotAtomic, Unordered, Monotonic, Acquire, or
SequentiallyConsistent, right?

> +  void setOrdering(AtomicOrdering Ordering) {
> +    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
> +                               (Ordering << 7));
> +  }
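Since the legal orderings differ between loads and stores, it might also be
worth asserting them here; a sketch (the assert is my suggestion, not part of
the patch; the store version would reject Acquire and AcquireRelease instead):

  void setOrdering(AtomicOrdering Ordering) {
    assert(Ordering != Release && Ordering != AcquireRelease &&
           "Load cannot have Release or AcquireRelease ordering");
    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                               (Ordering << 7));
  }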

> @@ -247,11 +294,42 @@

> +  /// Returns the ordering effect of this fence.

Similarly, not a fence.

> +  AtomicOrdering getOrdering() const {
> +    return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
> +  }
> +
> +  /// Set the ordering constraint on this fence.  May only be Acquire, Release,
> +  /// AcquireRelease, or SequentiallyConsistent.

Stores may use any ordering except Acquire and AcquireRelease.

> +  void setOrdering(AtomicOrdering Ordering) {
> +    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
> +                               (Ordering << 7));
> +  }
> +
> +  SynchronizationScope getSynchScope() const {
> +    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
> +  }
> +
> +  /// Specify whether this fence orders other operations with respect to all

s/fence/store/ here too.

> +  /// concurrently executing threads, or only with respect to signal handlers
> +  /// executing in the same thread.
> +  void setSynchScope(SynchronizationScope xthread) {
> +    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
> +                               (xthread << 6));
> +  }
> +
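At the IR level (using the syntax from the LangRef part of this patch), the
scope bit would distinguish, e.g.:

  ; ordered with respect to all concurrently executing threads
  store atomic i32 %val, i32* %ptr monotonic, align 4
  ; only ordered with respect to signal handlers in the same thread
  store atomic i32 %val, i32* %ptr singlethread monotonic, align 4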
> Index: docs/LangRef.html
> ===================================================================
> --- docs/LangRef.html	(revision 136929)
> +++ docs/LangRef.html	(working copy)
> @@ -4588,6 +4586,19 @@
>     number or order of execution of this <tt>load</tt> with other <a
>     href="#volatile">volatile operations</a>.</p>
>
> +<p>If the <code>load</code> is marked as <code>atomic</code>, it takes an extra
> +   <a href="#ordering">ordering</a> and optional <code>singlethread</code>
> +   argument.  The <code>release</code> and <code>acq_rel</code> orderings are
> +   not valid on <code>load</code> instructions.  Atomic loads produce <a
> +   href="#memorymodel">defined</a> results when they may see multiple atomic
> +   stores.  The type of the pointee must be an integer type whose bit width
> +   is a power of two greater than or equal to eight and less than or equal
> +   to a target-specific size limit. <code>align</code> must be explicitly
> +   specified on atomic loads, and the load has undefined behavior if the
> +   alignment is not set to a value which is at least the size in bytes of
> +   the pointee. <code>!nontemporal</code> does not have any defined semantics
> +   for atomic loads.</p>

Given that "A value of 0 or an omitted align argument means that the
operation has the preferential alignment for the target", I'm not sure
you need to require an explicit alignment for atomic loads and stores.
It'd make sense to me to just say that an alignment less than the
width of the type has undefined behavior.

But I don't have strong opinions about this.
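For reference, under the proposed syntax a well-formed atomic load would look
something like:

  %val = load atomic i32* %ptr acquire, align 4

where align 4 satisfies the "at least the size in bytes of the pointee"
requirement for i32.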

>  <p>The optional constant <tt>align</tt> argument specifies the alignment of the
>     operation (that is, the alignment of the memory address). A value of 0 or an
>     omitted <tt>align</tt> argument means that the operation has the preferential
> @@ -4648,6 +4659,19 @@
>     order of execution of this <tt>store</tt> with other <a
>     href="#volatile">volatile operations</a>.</p>
>
> +<p>If the <code>store</code> is marked as <code>atomic</code>, it takes an extra
> +   <a href="#ordering">ordering</a> and optional <code>singlethread</code>
> +   argument.  The <code>acquire</code> and <code>acq_rel</code> orderings aren't
> +   valid on <code>store</code> instructions.  Atomic loads produce <a
> +   href="#memorymodel">defined</a> results when they may see multiple atomic
> +   stores. The type of the pointee must be an integer type whose bit width
> +   is a power of two greater than or equal to eight and less than or equal
> +   to a target-specific size limit. <code>align</code> must be explicitly
> +   specified on atomic loads, and the load has undefined behavior if the

s/load/store/ (twice on that line: "atomic loads" and "the load").

> +   alignment is not set to a value which is at least the size in bytes of
> +   the pointee. <code>!nontemporal</code> does not have any defined semantics
> +   for atomic stores.</p>
> +
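The corresponding store, for reference:

  store atomic i32 %val, i32* %ptr release, align 4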
>  <p>The optional constant "align" argument specifies the alignment of the
>     operation (that is, the alignment of the memory address). A value of 0 or an
>     omitted "align" argument means that the operation has the preferential
> Index: lib/VMCore/AsmWriter.cpp
> ===================================================================
> --- lib/VMCore/AsmWriter.cpp	(revision 136929)
> +++ lib/VMCore/AsmWriter.cpp	(working copy)
> @@ -1660,13 +1660,16 @@
>    }
>
>    // If this is a volatile load or store, print out the volatile marker.

Copy this for the atomic marker.
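i.e., something like:

  // If this is an atomic load or store, print out the atomic marker.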

> +  if ((isa<LoadInst>(I)  && cast<LoadInst>(I).isAtomic()) ||
> +      (isa<StoreInst>(I) && cast<StoreInst>(I).isAtomic()))
> +    Out << "atomic ";
> +
>    if ((isa<LoadInst>(I)  && cast<LoadInst>(I).isVolatile()) ||
> -      (isa<StoreInst>(I) && cast<StoreInst>(I).isVolatile())) {
> -      Out << "volatile ";
> -  } else if (isa<CallInst>(I) && cast<CallInst>(I).isTailCall()) {
> -    // If this is a call, check if it's a tail call.
> +      (isa<StoreInst>(I) && cast<StoreInst>(I).isVolatile()))
> +    Out << "volatile ";
> +

Otherwise, looks good to me.



