[llvm] r307722 - Enhance synchscope representation

Konstantin Zhuravlyov via llvm-commits llvm-commits at lists.llvm.org
Tue Jul 11 15:23:01 PDT 2017


Author: kzhuravl
Date: Tue Jul 11 15:23:00 2017
New Revision: 307722

URL: http://llvm.org/viewvc/llvm-project?rev=307722&view=rev
Log:
Enhance synchscope representation

  OpenCL 2.0 introduces the notion of memory scopes for atomic operations on
  global and local memory. These scopes restrict how synchronization is
  achieved, which can result in improved performance.

  This change extends the existing notion of synchronization scopes in LLVM to
  support arbitrary scopes expressed as target-specific strings, in addition to
  the already defined scopes (single thread, system).

  The LLVM IR and MIR syntax for expressing synchronization scopes has changed
  to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this
  replaces the *singlethread* keyword) or a target-specific name. As before, if
  the scope is not specified, it defaults to the CrossThread/System scope.

  Implementation details:
    - Mapping from synchronization scope name/string to synchronization scope ID
      is stored in the LLVM context;
    - CrossThread/System and SingleThread scopes are pre-defined to efficiently
      check for known scopes without comparing strings;
    - Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in
      the bitcode.
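
  As an illustration (a sketch, not part of this patch; "agent" is a made-up
  target-specific scope name), a frontend would use the new LLVMContext and
  IRBuilder API introduced below like so:

    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    // Map a scope name to its SyncScope::ID and emit scoped fences.
    // Repeated calls with the same name return the same ID, and
    // "singlethread" maps to the pre-defined SyncScope::SingleThread.
    void emitScopedFences(IRBuilder<> &Builder, LLVMContext &Ctx) {
      SyncScope::ID Agent = Ctx.getOrInsertSyncScopeID("agent");
      // fence syncscope("agent") seq_cst
      Builder.CreateFence(AtomicOrdering::SequentiallyConsistent, Agent);
      // fence syncscope("singlethread") seq_cst
      Builder.CreateFence(AtomicOrdering::SequentiallyConsistent,
                          SyncScope::SingleThread);
    }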

Differential Revision: https://reviews.llvm.org/D21723


Added:
    llvm/trunk/test/Bitcode/atomic-no-syncscope.ll
    llvm/trunk/test/Bitcode/atomic-no-syncscope.ll.bc   (with props)
    llvm/trunk/test/CodeGen/AMDGPU/syncscopes.ll
    llvm/trunk/test/CodeGen/MIR/AMDGPU/syncscopes.mir
    llvm/trunk/test/Linker/Inputs/syncscope-1.ll
    llvm/trunk/test/Linker/Inputs/syncscope-2.ll
    llvm/trunk/test/Linker/syncscopes.ll
Modified:
    llvm/trunk/docs/LangRef.rst
    llvm/trunk/include/llvm/Bitcode/LLVMBitCodes.h
    llvm/trunk/include/llvm/CodeGen/MachineFunction.h
    llvm/trunk/include/llvm/CodeGen/MachineMemOperand.h
    llvm/trunk/include/llvm/CodeGen/SelectionDAG.h
    llvm/trunk/include/llvm/CodeGen/SelectionDAGNodes.h
    llvm/trunk/include/llvm/IR/IRBuilder.h
    llvm/trunk/include/llvm/IR/Instructions.h
    llvm/trunk/include/llvm/IR/LLVMContext.h
    llvm/trunk/lib/AsmParser/LLLexer.cpp
    llvm/trunk/lib/AsmParser/LLParser.cpp
    llvm/trunk/lib/AsmParser/LLParser.h
    llvm/trunk/lib/AsmParser/LLToken.h
    llvm/trunk/lib/Bitcode/Reader/BitcodeReader.cpp
    llvm/trunk/lib/Bitcode/Writer/BitcodeWriter.cpp
    llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp
    llvm/trunk/lib/CodeGen/GlobalISel/IRTranslator.cpp
    llvm/trunk/lib/CodeGen/MIRParser/MILexer.cpp
    llvm/trunk/lib/CodeGen/MIRParser/MILexer.h
    llvm/trunk/lib/CodeGen/MIRParser/MIParser.cpp
    llvm/trunk/lib/CodeGen/MIRPrinter.cpp
    llvm/trunk/lib/CodeGen/MachineFunction.cpp
    llvm/trunk/lib/CodeGen/MachineInstr.cpp
    llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
    llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
    llvm/trunk/lib/IR/AsmWriter.cpp
    llvm/trunk/lib/IR/Core.cpp
    llvm/trunk/lib/IR/Instruction.cpp
    llvm/trunk/lib/IR/Instructions.cpp
    llvm/trunk/lib/IR/LLVMContext.cpp
    llvm/trunk/lib/IR/LLVMContextImpl.cpp
    llvm/trunk/lib/IR/LLVMContextImpl.h
    llvm/trunk/lib/IR/Verifier.cpp
    llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
    llvm/trunk/lib/Target/SystemZ/SystemZISelLowering.cpp
    llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
    llvm/trunk/lib/Transforms/IPO/GlobalOpt.cpp
    llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
    llvm/trunk/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
    llvm/trunk/lib/Transforms/Scalar/GVN.cpp
    llvm/trunk/lib/Transforms/Scalar/JumpThreading.cpp
    llvm/trunk/lib/Transforms/Scalar/SROA.cpp
    llvm/trunk/lib/Transforms/Utils/FunctionComparator.cpp
    llvm/trunk/test/Assembler/atomic.ll
    llvm/trunk/test/Bitcode/atomic.ll
    llvm/trunk/test/Bitcode/compatibility-3.6.ll
    llvm/trunk/test/Bitcode/compatibility-3.7.ll
    llvm/trunk/test/Bitcode/compatibility-3.8.ll
    llvm/trunk/test/Bitcode/compatibility-3.9.ll
    llvm/trunk/test/Bitcode/compatibility-4.0.ll
    llvm/trunk/test/Bitcode/compatibility.ll
    llvm/trunk/test/Bitcode/memInstructions.3.2.ll
    llvm/trunk/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
    llvm/trunk/test/CodeGen/AArch64/fence-singlethread.ll
    llvm/trunk/test/CodeGen/ARM/fence-singlethread.ll
    llvm/trunk/test/CodeGen/MIR/AArch64/atomic-memoperands.mir
    llvm/trunk/test/CodeGen/PowerPC/atomics-regression.ll
    llvm/trunk/test/Instrumentation/ThreadSanitizer/atomic.ll
    llvm/trunk/test/Transforms/GVN/PRE/atomic.ll
    llvm/trunk/test/Transforms/InstCombine/consecutive-fences.ll
    llvm/trunk/test/Transforms/Sink/fence.ll
    llvm/trunk/unittests/Analysis/AliasAnalysisTest.cpp

Modified: llvm/trunk/docs/LangRef.rst
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/docs/LangRef.rst?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/docs/LangRef.rst (original)
+++ llvm/trunk/docs/LangRef.rst Tue Jul 11 15:23:00 2017
@@ -2209,12 +2209,21 @@ For a simpler introduction to the orderi
     same address in this global order. This corresponds to the C++0x/C1x
     ``memory_order_seq_cst`` and Java volatile.
 
-.. _singlethread:
+.. _syncscope:
 
-If an atomic operation is marked ``singlethread``, it only *synchronizes
-with* or participates in modification and seq\_cst total orderings with
-other operations running in the same thread (for example, in signal
-handlers).
+If an atomic operation is marked ``syncscope("singlethread")``, it only
+*synchronizes with* and only participates in the seq\_cst total orderings of
+other operations running in the same thread (for example, in signal handlers).
+
+If an atomic operation is marked ``syncscope("<target-scope>")``, where
+``<target-scope>`` is a target-specific synchronization scope, then it is
+target dependent whether it *synchronizes with* and participates in the
+seq\_cst total orderings of other operations.
+
+Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
+or ``syncscope("<target-scope>")`` *synchronizes with* and participates in the
+seq\_cst total orderings of other operations that are not marked
+``syncscope("singlethread")`` or ``syncscope("<target-scope>")``.
 
 .. _fastmath:
 
@@ -7380,7 +7389,7 @@ Syntax:
 ::
 
       <result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
-      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
+      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>]
       !<index> = !{ i32 1 }
       !<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
       !<align_node> = !{ i64 <value_alignment> }
@@ -7401,14 +7410,14 @@ modify the number or order of execution
 :ref:`volatile operations <volatile>`.
 
 If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``release`` and
-``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit.  ``align`` must be explicitly
-specified on atomic loads, and the load has undefined behavior if the alignment
-is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``release`` and ``acq_rel`` orderings are not valid on ``load`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit.  ``align`` must be
+explicitly specified on atomic loads, and the load has undefined behavior if the
+alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
 
 The optional constant ``align`` argument specifies the alignment of the
@@ -7509,7 +7518,7 @@ Syntax:
 ::
 
       store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>]        ; yields void
-      store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
+      store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
 
 Overview:
 """""""""
@@ -7529,14 +7538,14 @@ allowed to modify the number or order of
 structural type <t_opaque>`) can be stored.
 
 If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``acquire`` and
-``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit.  ``align`` must be explicitly
-specified on atomic stores, and the store has undefined behavior if the
-alignment is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``acquire`` and ``acq_rel`` orderings aren't valid on ``store`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit.  ``align`` must be
+explicitly specified on atomic stores, and the store has undefined behavior if
+the alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic stores.
 
 The optional constant ``align`` argument specifies the alignment of the
@@ -7597,7 +7606,7 @@ Syntax:
 
 ::
 
-      fence [singlethread] <ordering>                   ; yields void
+      fence [syncscope("<target-scope>")] <ordering>  ; yields void
 
 Overview:
 """""""""
@@ -7631,17 +7640,17 @@ A ``fence`` which has ``seq_cst`` orderi
 ``acquire`` and ``release`` semantics specified above, participates in
 the global program order of other ``seq_cst`` operations and/or fences.
 
-The optional ":ref:`singlethread <singlethread>`" argument specifies
-that the fence only synchronizes with other fences in the same thread.
-(This is useful for interacting with signal handlers.)
+A ``fence`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 Example:
 """"""""
 
 .. code-block:: llvm
 
-      fence acquire                          ; yields void
-      fence singlethread seq_cst             ; yields void
+      fence acquire                                        ; yields void
+      fence syncscope("singlethread") seq_cst              ; yields void
+      fence syncscope("agent") seq_cst                     ; yields void
 
 .. _i_cmpxchg:
 
@@ -7653,7 +7662,7 @@ Syntax:
 
 ::
 
-      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields  { ty, i1 }
+      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<target-scope>")] <success ordering> <failure ordering> ; yields  { ty, i1 }
 
 Overview:
 """""""""
@@ -7682,10 +7691,8 @@ must be at least ``monotonic``, the orde
 stronger than that on success, and the failure ordering cannot be either
 ``release`` or ``acq_rel``.
 
-The optional "``singlethread``" argument declares that the ``cmpxchg``
-is only atomic with respect to code (usually signal handlers) running in
-the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
-respect to all other code in the system.
+A ``cmpxchg`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 The pointer passed into cmpxchg must have alignment greater than or
 equal to the size in memory of the operand.
@@ -7739,7 +7746,7 @@ Syntax:
 
 ::
 
-      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering>                   ; yields ty
+      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<target-scope>")] <ordering>                   ; yields ty
 
 Overview:
 """""""""
@@ -7773,6 +7780,9 @@ be a pointer to that type. If the ``atom
 order of execution of this ``atomicrmw`` with other :ref:`volatile
 operations <volatile>`.
 
+An ``atomicrmw`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
+
 Semantics:
 """"""""""
 

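As a hedged companion to the LangRef text above, the same scoped atomics can be
produced through the C++ accessors this patch introduces ("agent" is again a
hypothetical target scope):

    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Sketch: turn a simple i32 load into a scoped atomic load.  Printed
    // form afterwards:
    //   %v = load atomic i32, i32* %p syncscope("agent") seq_cst, align 4
    void makeScopedAtomic(LoadInst *LI, LLVMContext &Ctx) {
      LI->setAlignment(4);
      LI->setAtomic(AtomicOrdering::SequentiallyConsistent,
                    Ctx.getOrInsertSyncScopeID("agent"));
    }
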
Modified: llvm/trunk/include/llvm/Bitcode/LLVMBitCodes.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Bitcode/LLVMBitCodes.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/Bitcode/LLVMBitCodes.h (original)
+++ llvm/trunk/include/llvm/Bitcode/LLVMBitCodes.h Tue Jul 11 15:23:00 2017
@@ -59,6 +59,8 @@ enum BlockIDs {
   FULL_LTO_GLOBALVAL_SUMMARY_BLOCK_ID,
 
   SYMTAB_BLOCK_ID,
+
+  SYNC_SCOPE_NAMES_BLOCK_ID,
 };
 
 /// Identification block contains a string that describes the producer details,
@@ -172,6 +174,10 @@ enum OperandBundleTagCode {
   OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
 };
 
+enum SyncScopeNameCode {
+  SYNC_SCOPE_NAME = 1,
+};
+
 // Value symbol table codes.
 enum ValueSymtabCodes {
   VST_CODE_ENTRY = 1,   // VST_ENTRY: [valueid, namechar x N]
@@ -404,12 +410,6 @@ enum AtomicOrderingCodes {
   ORDERING_SEQCST = 6
 };
 
-/// Encoded SynchronizationScope values.
-enum AtomicSynchScopeCodes {
-  SYNCHSCOPE_SINGLETHREAD = 0,
-  SYNCHSCOPE_CROSSTHREAD = 1
-};
-
 /// Markers and flags for call instruction.
 enum CallMarkersFlags {
   CALL_TAIL = 0,

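The writer side (BitcodeWriter.cpp is in the modified list but its hunk is not
quoted here) emits one SYNC_SCOPE_NAME record per registered scope, in
increasing ID order, so the reader can rebuild the mapping by record index.
Roughly, as a sketch:

    // Assumes a BitstreamWriter `Stream`; names come back from the
    // context ordered by increasing SyncScope::ID.
    void writeSyncScopeNames(BitstreamWriter &Stream, const LLVMContext &Ctx) {
      SmallVector<StringRef, 8> SSNs;
      Ctx.getSyncScopeNames(SSNs);
      if (SSNs.empty())
        return;
      Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
      SmallVector<uint64_t, 64> Record;
      for (StringRef SSN : SSNs) {
        Record.append(SSN.begin(), SSN.end());
        Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record);
        Record.clear();
      }
      Stream.ExitBlock();
    }
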
Modified: llvm/trunk/include/llvm/CodeGen/MachineFunction.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/MachineFunction.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/MachineFunction.h (original)
+++ llvm/trunk/include/llvm/CodeGen/MachineFunction.h Tue Jul 11 15:23:00 2017
@@ -650,7 +650,7 @@ public:
       MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
       unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
       const MDNode *Ranges = nullptr,
-      SynchronizationScope SynchScope = CrossThread,
+      SyncScope::ID SSID = SyncScope::System,
       AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
       AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
 

Modified: llvm/trunk/include/llvm/CodeGen/MachineMemOperand.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/MachineMemOperand.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/MachineMemOperand.h (original)
+++ llvm/trunk/include/llvm/CodeGen/MachineMemOperand.h Tue Jul 11 15:23:00 2017
@@ -124,8 +124,8 @@ public:
 private:
   /// Atomic information for this memory operation.
   struct MachineAtomicInfo {
-    /// Synchronization scope for this memory operation.
-    unsigned SynchScope : 1;      // enum SynchronizationScope
+    /// Synchronization scope ID for this memory operation.
+    unsigned SSID : 8;            // SyncScope::ID
     /// Atomic ordering requirements for this memory operation. For cmpxchg
     /// atomic operations, atomic ordering requirements when store occurs.
     unsigned Ordering : 4;        // enum AtomicOrdering
@@ -152,7 +152,7 @@ public:
                     unsigned base_alignment,
                     const AAMDNodes &AAInfo = AAMDNodes(),
                     const MDNode *Ranges = nullptr,
-                    SynchronizationScope SynchScope = CrossThread,
+                    SyncScope::ID SSID = SyncScope::System,
                     AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
                     AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
 
@@ -202,9 +202,9 @@ public:
   /// Return the range tag for the memory reference.
   const MDNode *getRanges() const { return Ranges; }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const {
-    return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const {
+    return static_cast<SyncScope::ID>(AtomicInfo.SSID);
   }
 
   /// Return the atomic ordering requirements for this memory operation. For

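With the full 8-bit ID stored in the MachineMemOperand, a backend can key
lowering decisions off the scope; a hypothetical check (needsCacheFlush is an
invented name):

    // Single-thread scope never needs a cross-thread synchronization
    // sequence; System or any target-specific scope conservatively might.
    static bool needsCacheFlush(const MachineMemOperand *MMO) {
      SyncScope::ID SSID = MMO->getSyncScopeID();
      return SSID != SyncScope::SingleThread;
    }
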
Modified: llvm/trunk/include/llvm/CodeGen/SelectionDAG.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/SelectionDAG.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/SelectionDAG.h (original)
+++ llvm/trunk/include/llvm/CodeGen/SelectionDAG.h Tue Jul 11 15:23:00 2017
@@ -927,7 +927,7 @@ public:
                            SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
                            unsigned Alignment, AtomicOrdering SuccessOrdering,
                            AtomicOrdering FailureOrdering,
-                           SynchronizationScope SynchScope);
+                           SyncScope::ID SSID);
   SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                            SDVTList VTs, SDValue Chain, SDValue Ptr,
                            SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
@@ -937,7 +937,7 @@ public:
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, const Value *PtrVal,
                     unsigned Alignment, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope);
+                    SyncScope::ID SSID);
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, MachineMemOperand *MMO);
 

Modified: llvm/trunk/include/llvm/CodeGen/SelectionDAGNodes.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/CodeGen/SelectionDAGNodes.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/CodeGen/SelectionDAGNodes.h (original)
+++ llvm/trunk/include/llvm/CodeGen/SelectionDAGNodes.h Tue Jul 11 15:23:00 2017
@@ -1213,8 +1213,8 @@ public:
   /// Returns the Ranges that describes the dereference.
   const MDNode *getRanges() const { return MMO->getRanges(); }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
 
   /// Return the atomic ordering requirements for this memory operation. For
   /// cmpxchg atomic operations, return the atomic ordering requirements when

Modified: llvm/trunk/include/llvm/IR/IRBuilder.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/IRBuilder.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/IR/IRBuilder.h (original)
+++ llvm/trunk/include/llvm/IR/IRBuilder.h Tue Jul 11 15:23:00 2017
@@ -1203,22 +1203,22 @@ public:
     return SI;
   }
   FenceInst *CreateFence(AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope = CrossThread,
+                         SyncScope::ID SSID = SyncScope::System,
                          const Twine &Name = "") {
-    return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
+    return Insert(new FenceInst(Context, Ordering, SSID), Name);
   }
   AtomicCmpXchgInst *
   CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
                       AtomicOrdering SuccessOrdering,
                       AtomicOrdering FailureOrdering,
-                      SynchronizationScope SynchScope = CrossThread) {
+                      SyncScope::ID SSID = SyncScope::System) {
     return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
-                                        FailureOrdering, SynchScope));
+                                        FailureOrdering, SSID));
   }
   AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
                                  AtomicOrdering Ordering,
-                               SynchronizationScope SynchScope = CrossThread) {
-    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
+                                 SyncScope::ID SSID = SyncScope::System) {
+    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
   }
   Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
                    const Twine &Name = "") {

Modified: llvm/trunk/include/llvm/IR/Instructions.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/Instructions.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/IR/Instructions.h (original)
+++ llvm/trunk/include/llvm/IR/Instructions.h Tue Jul 11 15:23:00 2017
@@ -52,11 +52,6 @@ class ConstantInt;
 class DataLayout;
 class LLVMContext;
 
-enum SynchronizationScope {
-  SingleThread = 0,
-  CrossThread = 1
-};
-
 //===----------------------------------------------------------------------===//
 //                                AllocaInst Class
 //===----------------------------------------------------------------------===//
@@ -195,17 +190,16 @@ public:
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
-           AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
+           AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr)
       : LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
-                 NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
+                 NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
   LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope = CrossThread,
+           SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
-           unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope,
+           unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
            BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
   LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
@@ -235,34 +229,34 @@ public:
 
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this load instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this load. May not be Release or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this load instruction.  May not be Release
+  /// or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this load instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this load is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this load instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this load
+  /// instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -297,6 +291,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this load instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -325,11 +324,10 @@ public:
             unsigned Align, BasicBlock *InsertAtEnd);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
             unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
-            unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope,
+            unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -356,34 +354,34 @@ public:
 
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this store.
+  /// Returns the ordering constraint of this store instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this store.  May not be Acquire or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this store instruction.  May not be
+  /// Acquire or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this store instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this store instruction is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this store instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this
+  /// store instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -421,6 +419,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this store instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -435,7 +438,7 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(Sto
 
 /// An instruction for ordering other memory operations.
 class FenceInst : public Instruction {
-  void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
+  void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -447,10 +450,9 @@ public:
   // Ordering may only be Acquire, Release, AcquireRelease, or
   // SequentiallyConsistent.
   FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
-  FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope,
+  FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly zero operands
@@ -458,28 +460,26 @@ public:
     return User::operator new(s, 0);
   }
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this fence instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
   }
 
-  /// Set the ordering constraint on this fence.  May only be Acquire, Release,
-  /// AcquireRelease, or SequentiallyConsistent.
+  /// Sets the ordering constraint of this fence instruction.  May only be
+  /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
                                ((unsigned)Ordering << 1));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope(getSubclassDataFromInstruction() & 1);
+  /// Returns the synchronization scope ID of this fence instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this fence orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
-                               xthread);
+  /// Sets the synchronization scope ID of this fence instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -496,6 +496,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this fence instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -509,7 +514,7 @@ private:
 class AtomicCmpXchgInst : public Instruction {
   void Init(Value *Ptr, Value *Cmp, Value *NewVal,
             AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
-            SynchronizationScope SynchScope);
+            SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -521,13 +526,11 @@ public:
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    Instruction *InsertBefore = nullptr);
+                    SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    BasicBlock *InsertAtEnd);
+                    SyncScope::ID SSID, BasicBlock *InsertAtEnd);
 
   // allocate space for exactly three operands
   void *operator new(size_t s) {
@@ -561,7 +564,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this cmpxchg.
+  /// Returns the success ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getSuccessOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the success ordering constraint of this cmpxchg instruction.
   void setSuccessOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -569,6 +577,12 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
+  /// Returns the failure ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getFailureOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  }
+
+  /// Sets the failure ordering constraint of this cmpxchg instruction.
   void setFailureOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -576,28 +590,14 @@ public:
                                ((unsigned)Ordering << 5));
   }
 
-  /// Specify whether this cmpxchg is atomic and orders other operations with
-  /// respect to all concurrently executing threads, or only with respect to
-  /// signal handlers executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
+  /// Returns the synchronization scope ID of this cmpxchg instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getSuccessOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getFailureOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
-  }
-
-  /// Returns whether this cmpxchg is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this cmpxchg instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -652,6 +652,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this cmpxchg instruction.  Not quite
+  /// enough room in SubClassData for everything, so synchronization scope ID
+  /// gets its own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -711,10 +716,10 @@ public:
   };
 
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 Instruction *InsertBefore = nullptr);
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -748,7 +753,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this RMW.
+  /// Returns the ordering constraint of this rmw instruction.
+  AtomicOrdering getOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the ordering constraint of this rmw instruction.
   void setOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "atomicrmw instructions can only be atomic.");
@@ -756,23 +766,14 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
-  /// Specify whether this RMW orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
+  /// Returns the synchronization scope ID of this rmw instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns the ordering constraint on this RMW.
-  AtomicOrdering getOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns whether this RMW is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this rmw instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -797,13 +798,18 @@ public:
 
 private:
   void Init(BinOp Operation, Value *Ptr, Value *Val,
-            AtomicOrdering Ordering, SynchronizationScope SynchScope);
+            AtomicOrdering Ordering, SyncScope::ID SSID);
 
   // Shadow Instruction::setInstructionSubclassData with a private forwarding
   // method so that subclasses cannot accidentally use it.
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this rmw instruction.  Not quite enough
+  /// room in SubClassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>

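Transforms that rebuild atomic instructions (several are updated below) now
carry the scope ID across instead of a single bit; a representative sketch:

    // Replace an atomicrmw xchg with one on a rewritten pointer,
    // preserving ordering, volatility, and synchronization scope.
    AtomicRMWInst *rebuildXchg(AtomicRMWInst *RMW, Value *NewPtr) {
      auto *NewRMW = new AtomicRMWInst(AtomicRMWInst::Xchg, NewPtr,
                                       RMW->getValOperand(),
                                       RMW->getOrdering(),
                                       RMW->getSyncScopeID(), RMW);
      NewRMW->setVolatile(RMW->isVolatile());
      return NewRMW;
    }
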
Modified: llvm/trunk/include/llvm/IR/LLVMContext.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/IR/LLVMContext.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/include/llvm/IR/LLVMContext.h (original)
+++ llvm/trunk/include/llvm/IR/LLVMContext.h Tue Jul 11 15:23:00 2017
@@ -42,6 +42,24 @@ class Output;
 
 } // end namespace yaml
 
+namespace SyncScope {
+
+typedef uint8_t ID;
+
+/// Known synchronization scope IDs, which always have the same value.  All
+/// synchronization scope IDs that LLVM has special knowledge of are listed
+/// here.  Additionally, this scheme allows LLVM to efficiently check for a
+/// specific synchronization scope ID without comparing strings.
+enum {
+  /// Synchronized with respect to signal handlers executing in the same thread.
+  SingleThread = 0,
+
+  /// Synchronized with respect to all concurrently executing threads.
+  System = 1
+};
+
+} // end namespace SyncScope
+
 /// This is an important class for using LLVM in a threaded context.  It
 /// (opaquely) owns and manages the core "global" data of LLVM's core
 /// infrastructure, including the type and constant uniquing tables.
@@ -111,6 +129,16 @@ public:
   /// tag registered with an LLVMContext has an unique ID.
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// getOrInsertSyncScopeID - Maps a synchronization scope name to a
+  /// synchronization scope ID.  Every scope registered with an LLVMContext
+  /// gets a unique ID, except for the pre-defined ones, which are fixed.
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates the client-supplied SmallVector with the
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope ID.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Define the GC for a function
   void setGC(const Function &Fn, std::string GCName);
 

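A small sketch of the enumeration side of this API; since names are ordered by
increasing ID, the vector index is the SyncScope::ID, and slots 0 and 1 always
hold the pre-defined SingleThread and System scopes:

    #include "llvm/IR/LLVMContext.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // List every synchronization scope registered with a context.
    void dumpSyncScopes(const LLVMContext &Ctx) {
      SmallVector<StringRef, 8> SSNs;
      Ctx.getSyncScopeNames(SSNs);
      for (unsigned I = 0, E = SSNs.size(); I != E; ++I)
        errs() << I << " -> \"" << SSNs[I] << "\"\n";
    }
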
Modified: llvm/trunk/lib/AsmParser/LLLexer.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLLexer.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/AsmParser/LLLexer.cpp (original)
+++ llvm/trunk/lib/AsmParser/LLLexer.cpp Tue Jul 11 15:23:00 2017
@@ -542,7 +542,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(release);
   KEYWORD(acq_rel);
   KEYWORD(seq_cst);
-  KEYWORD(singlethread);
+  KEYWORD(syncscope);
 
   KEYWORD(nnan);
   KEYWORD(ninf);

Modified: llvm/trunk/lib/AsmParser/LLParser.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLParser.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/AsmParser/LLParser.cpp (original)
+++ llvm/trunk/lib/AsmParser/LLParser.cpp Tue Jul 11 15:23:00 2017
@@ -1919,20 +1919,42 @@ bool LLParser::parseAllocSizeArguments(u
 }
 
 /// ParseScopeAndOrdering
-///   if isAtomic: ::= 'singlethread'? AtomicOrdering
+///   if isAtomic: ::= SyncScope? AtomicOrdering
 ///   else: ::=
 ///
 /// This sets Scope and Ordering to the parsed values.
-bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                      AtomicOrdering &Ordering) {
   if (!isAtomic)
     return false;
 
-  Scope = CrossThread;
-  if (EatIfPresent(lltok::kw_singlethread))
-    Scope = SingleThread;
+  return ParseScope(SSID) || ParseOrdering(Ordering);
+}
+
+/// ParseScope
+///   ::= syncscope("singlethread" | "<target scope>")?
+///
+/// This sets synchronization scope ID to the ID of the parsed value.
+bool LLParser::ParseScope(SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (EatIfPresent(lltok::kw_syncscope)) {
+    auto StartParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::lparen))
+      return Error(StartParenAt, "Expected '(' in syncscope");
+
+    std::string SSN;
+    auto SSNAt = Lex.getLoc();
+    if (ParseStringConstant(SSN))
+      return Error(SSNAt, "Expected synchronization scope name");
+
+    auto EndParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::rparen))
+      return Error(EndParenAt, "Expected ')' in syncscope");
 
-  return ParseOrdering(Ordering);
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+  }
+
+  return false;
 }
 
 /// ParseOrdering
@@ -6100,7 +6122,7 @@ int LLParser::ParseLoad(Instruction *&In
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6118,7 +6140,7 @@ int LLParser::ParseLoad(Instruction *&In
   if (ParseType(Ty) ||
       ParseToken(lltok::comma, "expected comma after load's type") ||
       ParseTypeAndValue(Val, Loc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6134,7 +6156,7 @@ int LLParser::ParseLoad(Instruction *&In
     return Error(ExplicitTypeLoc,
                  "explicit pointee type doesn't match operand's pointee type");
 
-  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
+  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6149,7 +6171,7 @@ int LLParser::ParseStore(Instruction *&I
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6165,7 +6187,7 @@ int LLParser::ParseStore(Instruction *&I
   if (ParseTypeAndValue(Val, Loc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after store operand") ||
       ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
 
@@ -6181,7 +6203,7 @@ int LLParser::ParseStore(Instruction *&I
       Ordering == AtomicOrdering::AcquireRelease)
     return Error(Loc, "atomic store cannot use Acquire ordering");
 
-  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
+  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
 
@@ -6193,7 +6215,7 @@ int LLParser::ParseCmpXchg(Instruction *
   bool AteExtraComma = false;
   AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
   AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   bool isWeak = false;
 
@@ -6208,7 +6230,7 @@ int LLParser::ParseCmpXchg(Instruction *
       ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
       ParseTypeAndValue(New, NewLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
       ParseOrdering(FailureOrdering))
     return true;
 
@@ -6231,7 +6253,7 @@ int LLParser::ParseCmpXchg(Instruction *
   if (!New->getType()->isFirstClassType())
     return Error(NewLoc, "cmpxchg operand must be a first class value");
   AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
-      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
+      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
   CXI->setVolatile(isVolatile);
   CXI->setWeak(isWeak);
   Inst = CXI;
@@ -6245,7 +6267,7 @@ int LLParser::ParseAtomicRMW(Instruction
   Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
   bool AteExtraComma = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   AtomicRMWInst::BinOp Operation;
 
@@ -6271,7 +6293,7 @@ int LLParser::ParseAtomicRMW(Instruction
   if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
       ParseTypeAndValue(Val, ValLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6288,7 +6310,7 @@ int LLParser::ParseAtomicRMW(Instruction
                          " integer");
 
   AtomicRMWInst *RMWI =
-    new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
+    new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
   RMWI->setVolatile(isVolatile);
   Inst = RMWI;
   return AteExtraComma ? InstExtraComma : InstNormal;
@@ -6298,8 +6320,8 @@ int LLParser::ParseAtomicRMW(Instruction
 ///   ::= 'fence' 'singlethread'? AtomicOrdering
 int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
-  if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+  SyncScope::ID SSID = SyncScope::System;
+  if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6307,7 +6329,7 @@ int LLParser::ParseFence(Instruction *&I
   if (Ordering == AtomicOrdering::Monotonic)
     return TokError("fence cannot be monotonic");
 
-  Inst = new FenceInst(Context, Ordering, Scope);
+  Inst = new FenceInst(Context, Ordering, SSID);
   return InstNormal;
 }
 

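The new grammar can be exercised end to end through the public AsmParser entry
point; a sketch (again using the invented "agent" scope):

    #include "llvm/AsmParser/Parser.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/SourceMgr.h"

    using namespace llvm;

    // Parse a fence with a target-specific scope and read the ID back.
    bool roundTripSyncScope(LLVMContext &Ctx) {
      SMDiagnostic Err;
      std::unique_ptr<Module> M = parseAssemblyString(
          "define void @f() {\n"
          "  fence syncscope(\"agent\") seq_cst\n"
          "  ret void\n"
          "}\n",
          Err, Ctx);
      if (!M)
        return false;
      auto *FI = cast<FenceInst>(&M->getFunction("f")->front().front());
      return FI->getSyncScopeID() == Ctx.getOrInsertSyncScopeID("agent");
    }
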
Modified: llvm/trunk/lib/AsmParser/LLParser.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLParser.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/AsmParser/LLParser.h (original)
+++ llvm/trunk/lib/AsmParser/LLParser.h Tue Jul 11 15:23:00 2017
@@ -241,8 +241,9 @@ namespace llvm {
     bool ParseOptionalCallingConv(unsigned &CC);
     bool ParseOptionalAlignment(unsigned &Alignment);
     bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
-    bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+    bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                AtomicOrdering &Ordering);
+    bool ParseScope(SyncScope::ID &SSID);
     bool ParseOrdering(AtomicOrdering &Ordering);
     bool ParseOptionalStackAlignment(unsigned &Alignment);
     bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);

Modified: llvm/trunk/lib/AsmParser/LLToken.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/AsmParser/LLToken.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/AsmParser/LLToken.h (original)
+++ llvm/trunk/lib/AsmParser/LLToken.h Tue Jul 11 15:23:00 2017
@@ -93,7 +93,7 @@ enum Kind {
   kw_release,
   kw_acq_rel,
   kw_seq_cst,
-  kw_singlethread,
+  kw_syncscope,
   kw_nnan,
   kw_ninf,
   kw_nsz,

Modified: llvm/trunk/lib/Bitcode/Reader/BitcodeReader.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Bitcode/Reader/BitcodeReader.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Bitcode/Reader/BitcodeReader.cpp (original)
+++ llvm/trunk/lib/Bitcode/Reader/BitcodeReader.cpp Tue Jul 11 15:23:00 2017
@@ -513,6 +513,7 @@ class BitcodeReader : public BitcodeRead
   TBAAVerifier TBAAVerifyHelper;
 
   std::vector<std::string> BundleTags;
+  SmallVector<SyncScope::ID, 8> SSIDs;
 
 public:
   BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
@@ -648,6 +649,7 @@ private:
   Error parseTypeTable();
   Error parseTypeTableBody();
   Error parseOperandBundleTags();
+  Error parseSyncScopeNames();
 
   Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
                                 unsigned NameIndex, Triple &TT);
@@ -668,6 +670,8 @@ private:
   Error findFunctionInStream(
       Function *F,
       DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
+
+  SyncScope::ID getDecodedSyncScopeID(unsigned Val);
 };
 
 /// Class to manage reading and parsing function summary index bitcode
@@ -998,14 +1002,6 @@ static AtomicOrdering getDecodedOrdering
   }
 }
 
-static SynchronizationScope getDecodedSynchScope(unsigned Val) {
-  switch (Val) {
-  case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
-  default: // Map unknown scopes to cross-thread.
-  case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
-  }
-}
-
 static Comdat::SelectionKind getDecodedComdatSelectionKind(unsigned Val) {
   switch (Val) {
   default: // Map unknown selection kinds to any.
@@ -1745,6 +1741,44 @@ Error BitcodeReader::parseOperandBundleT
   }
 }
 
+Error BitcodeReader::parseSyncScopeNames() {
+  if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
+    return error("Invalid record");
+
+  if (!SSIDs.empty())
+    return error("Invalid multiple synchronization scope names blocks");
+
+  SmallVector<uint64_t, 64> Record;
+  while (true) {
+    BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
+    switch (Entry.Kind) {
+    case BitstreamEntry::SubBlock: // Handled for us already.
+    case BitstreamEntry::Error:
+      return error("Malformed block");
+    case BitstreamEntry::EndBlock:
+      if (SSIDs.empty())
+        return error("Invalid empty synchronization scope names block");
+      return Error::success();
+    case BitstreamEntry::Record:
+      // The interesting case.
+      break;
+    }
+
+    // Synchronization scope names are implicitly mapped to synchronization
+    // scope IDs by their order.
+
+    if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
+      return error("Invalid record");
+
+    SmallString<16> SSN;
+    if (convertToString(Record, 0, SSN))
+      return error("Invalid record");
+
+    SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
+    Record.clear();
+  }
+}
+
 /// Associate a value with its name from the given index in the provided record.
 Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
                                              unsigned NameIndex, Triple &TT) {
@@ -3132,6 +3166,10 @@ Error BitcodeReader::parseModule(uint64_
         if (Error Err = parseOperandBundleTags())
           return Err;
         break;
+      case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
+        if (Error Err = parseSyncScopeNames())
+          return Err;
+        break;
       }
       continue;
 
@@ -4204,7 +4242,7 @@ Error BitcodeReader::parseFunctionBody(F
       break;
     }
     case bitc::FUNC_CODE_INST_LOADATOMIC: {
-       // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
+       // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Op;
       if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
@@ -4226,12 +4264,12 @@ Error BitcodeReader::parseFunctionBody(F
         return error("Invalid record");
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
 
       InstructionList.push_back(I);
       break;
@@ -4260,7 +4298,7 @@ Error BitcodeReader::parseFunctionBody(F
     }
     case bitc::FUNC_CODE_INST_STOREATOMIC:
     case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
-      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
+      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Val, *Ptr;
       if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4280,20 +4318,20 @@ Error BitcodeReader::parseFunctionBody(F
           Ordering == AtomicOrdering::Acquire ||
           Ordering == AtomicOrdering::AcquireRelease)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
       InstructionList.push_back(I);
       break;
     }
     case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
     case bitc::FUNC_CODE_INST_CMPXCHG: {
-      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
+      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
       //          failureordering?, isweak?]
       unsigned OpNum = 0;
       Value *Ptr, *Cmp, *New;
@@ -4310,7 +4348,7 @@ Error BitcodeReader::parseFunctionBody(F
       if (SuccessOrdering == AtomicOrdering::NotAtomic ||
           SuccessOrdering == AtomicOrdering::Unordered)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
 
       if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
         return Err;
@@ -4322,7 +4360,7 @@ Error BitcodeReader::parseFunctionBody(F
         FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
 
       I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
-                                SynchScope);
+                                SSID);
       cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
 
       if (Record.size() < 8) {
@@ -4339,7 +4377,7 @@ Error BitcodeReader::parseFunctionBody(F
       break;
     }
     case bitc::FUNC_CODE_INST_ATOMICRMW: {
-      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope]
+      // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Ptr, *Val;
       if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4356,13 +4394,13 @@ Error BitcodeReader::parseFunctionBody(F
       if (Ordering == AtomicOrdering::NotAtomic ||
           Ordering == AtomicOrdering::Unordered)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
-      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
+      I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
       cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
       InstructionList.push_back(I);
       break;
     }
-    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope]
+    case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
       if (2 != Record.size())
         return error("Invalid record");
       AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
@@ -4370,8 +4408,8 @@ Error BitcodeReader::parseFunctionBody(F
           Ordering == AtomicOrdering::Unordered ||
           Ordering == AtomicOrdering::Monotonic)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]);
-      I = new FenceInst(Context, Ordering, SynchScope);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
+      I = new FenceInst(Context, Ordering, SSID);
       InstructionList.push_back(I);
       break;
     }
@@ -4567,6 +4605,14 @@ Error BitcodeReader::findFunctionInStrea
   return Error::success();
 }
 
+SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
+  if (Val == SyncScope::SingleThread || Val == SyncScope::System)
+    return SyncScope::ID(Val);
+  if (Val >= SSIDs.size())
+    return SyncScope::System; // Map unknown synchronization scopes to system.
+  return SSIDs[Val];
+}
+
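On the fallback above: scope IDs 0 and 1 are the pre-defined single-thread and
system scopes and are valid in every context, so they pass through unchanged;
any other value indexes the SSIDs table built by parseSyncScopeNames, and an
out-of-range index conservatively degrades to the system scope instead of
rejecting the module. A sketch of the paths, assuming exactly three names were
read from the bitcode:

  getDecodedSyncScopeID(0); // SyncScope::SingleThread, pass-through
  getDecodedSyncScopeID(1); // SyncScope::System, pass-through
  getDecodedSyncScopeID(2); // SSIDs[2], a scope named in the bitcode
  getDecodedSyncScopeID(9); // out of range, mapped to SyncScope::System
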
 //===----------------------------------------------------------------------===//
 // GVMaterializer implementation
 //===----------------------------------------------------------------------===//

Modified: llvm/trunk/lib/Bitcode/Writer/BitcodeWriter.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Bitcode/Writer/BitcodeWriter.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Bitcode/Writer/BitcodeWriter.cpp (original)
+++ llvm/trunk/lib/Bitcode/Writer/BitcodeWriter.cpp Tue Jul 11 15:23:00 2017
@@ -266,6 +266,7 @@ private:
                                     const GlobalObject &GO);
   void writeModuleMetadataKinds();
   void writeOperandBundleTags();
+  void writeSyncScopeNames();
   void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
   void writeModuleConstants();
   bool pushValueAndType(const Value *V, unsigned InstID,
@@ -316,6 +317,10 @@ private:
     return VE.getValueID(VI.getValue());
   }
   std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
+
+  unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
+    return unsigned(SSID);
+  }
 };
 
 /// Class to manage the bitcode writing for a combined index.
@@ -485,14 +490,6 @@ static unsigned getEncodedOrdering(Atomi
   llvm_unreachable("Invalid ordering");
 }
 
-static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
-  switch (SynchScope) {
-  case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
-  case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
-  }
-  llvm_unreachable("Invalid synch scope");
-}
-
 static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
                               StringRef Str, unsigned AbbrevToUse) {
   SmallVector<unsigned, 64> Vals;
@@ -2042,6 +2039,24 @@ void ModuleBitcodeWriter::writeOperandBu
   Stream.ExitBlock();
 }
 
+void ModuleBitcodeWriter::writeSyncScopeNames() {
+  SmallVector<StringRef, 8> SSNs;
+  M.getContext().getSyncScopeNames(SSNs);
+  if (SSNs.empty())
+    return;
+
+  Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
+
+  SmallVector<uint64_t, 64> Record;
+  for (auto SSN : SSNs) {
+    Record.append(SSN.begin(), SSN.end());
+    Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
+    Record.clear();
+  }
+
+  Stream.ExitBlock();
+}
+
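One way names end up here, as a hedged sketch: anything that interns a scope
through the context makes it visible to getSyncScopeNames, and hence to this
writer ("agent" is a hypothetical target scope; Ctx and BB are assumed to
exist):

  IRBuilder<> Builder(BB);
  Builder.CreateFence(AtomicOrdering::SequentiallyConsistent,
                      Ctx.getOrInsertSyncScopeID("agent"));
  // getSyncScopeNames now returns {"singlethread", "", "agent"}, so the
  // writer emits three SYNC_SCOPE_NAME records, each carrying the raw
  // characters of one name, with IDs implied by record order.
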
 static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
   if ((int64_t)V >= 0)
     Vals.push_back(V << 1);
@@ -2658,7 +2673,7 @@ void ModuleBitcodeWriter::writeInstructi
     Vals.push_back(cast<LoadInst>(I).isVolatile());
     if (cast<LoadInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
+      Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::Store:
@@ -2672,7 +2687,8 @@ void ModuleBitcodeWriter::writeInstructi
     Vals.push_back(cast<StoreInst>(I).isVolatile());
     if (cast<StoreInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
+      Vals.push_back(
+          getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::AtomicCmpXchg:
@@ -2684,7 +2700,7 @@ void ModuleBitcodeWriter::writeInstructi
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
     Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());
@@ -2698,12 +2714,12 @@ void ModuleBitcodeWriter::writeInstructi
     Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
     Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
     break;
   case Instruction::Fence:
     Code = bitc::FUNC_CODE_INST_FENCE;
     Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
-    Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
+    Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
     break;
   case Instruction::Call: {
     const CallInst &CI = cast<CallInst>(I);
@@ -3716,6 +3732,7 @@ void ModuleBitcodeWriter::write() {
     writeUseListBlock(nullptr);
 
   writeOperandBundleTags();
+  writeSyncScopeNames();
 
   // Emit function bodies.
   DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;

Modified: llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp (original)
+++ llvm/trunk/lib/CodeGen/AtomicExpandPass.cpp Tue Jul 11 15:23:00 2017
@@ -361,7 +361,7 @@ LoadInst *AtomicExpand::convertAtomicLoa
   auto *NewLI = Builder.CreateLoad(NewAddr);
   NewLI->setAlignment(LI->getAlignment());
   NewLI->setVolatile(LI->isVolatile());
-  NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
+  NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");
   
   Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());
@@ -444,7 +444,7 @@ StoreInst *AtomicExpand::convertAtomicSt
   StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
   NewSI->setAlignment(SI->getAlignment());
   NewSI->setVolatile(SI->isVolatile());
-  NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
+  NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
   SI->eraseFromParent();
   return NewSI;
@@ -801,7 +801,7 @@ void AtomicExpand::expandPartwordCmpXchg
   Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
   AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
       PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
-      CI->getFailureOrdering(), CI->getSynchScope());
+      CI->getFailureOrdering(), CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   // When we're building a strong cmpxchg, we need a loop, so you
   // might think we could use a weak cmpxchg inside. But, using strong
@@ -924,7 +924,7 @@ AtomicCmpXchgInst *AtomicExpand::convert
   auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
                                             CI->getSuccessOrdering(),
                                             CI->getFailureOrdering(),
-                                            CI->getSynchScope());
+                                            CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   NewCI->setWeak(CI->isWeak());
   DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");

Modified: llvm/trunk/lib/CodeGen/GlobalISel/IRTranslator.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/GlobalISel/IRTranslator.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/GlobalISel/IRTranslator.cpp (original)
+++ llvm/trunk/lib/CodeGen/GlobalISel/IRTranslator.cpp Tue Jul 11 15:23:00 2017
@@ -345,7 +345,7 @@ bool IRTranslator::translateLoad(const U
       *MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
                                 Flags, DL->getTypeStoreSize(LI.getType()),
                                 getMemOpAlignment(LI), AAMDNodes(), nullptr,
-                                LI.getSynchScope(), LI.getOrdering()));
+                                LI.getSyncScopeID(), LI.getOrdering()));
   return true;
 }
 
@@ -363,7 +363,7 @@ bool IRTranslator::translateStore(const
       *MF->getMachineMemOperand(
           MachinePointerInfo(SI.getPointerOperand()), Flags,
           DL->getTypeStoreSize(SI.getValueOperand()->getType()),
-          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
+          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
           SI.getOrdering()));
   return true;
 }

Modified: llvm/trunk/lib/CodeGen/MIRParser/MILexer.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MIRParser/MILexer.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MIRParser/MILexer.cpp (original)
+++ llvm/trunk/lib/CodeGen/MIRParser/MILexer.cpp Tue Jul 11 15:23:00 2017
@@ -365,6 +365,14 @@ static Cursor maybeLexIRValue(Cursor C,
   return lexName(C, Token, MIToken::NamedIRValue, Rule.size(), ErrorCallback);
 }
 
+static Cursor maybeLexStringConstant(Cursor C, MIToken &Token,
+                                     ErrorCallbackType ErrorCallback) {
+  if (C.peek() != '"')
+    return None;
+  return lexName(C, Token, MIToken::StringConstant, /*PrefixLength=*/0,
+                 ErrorCallback);
+}
+
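The zero PrefixLength means the opening quote itself starts the token, so a
bare double-quoted literal becomes a single StringConstant. Within a memory
operand, a scope clause then tokenizes roughly as follows ("agent" being a
hypothetical target scope):

  syncscope("agent")  =>  Identifier("syncscope") lparen StringConstant("agent") rparen
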
 static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
   auto Range = C;
   C.advance(); // Skip '%'
@@ -630,6 +638,8 @@ StringRef llvm::lexMIToken(StringRef Sou
     return R.remaining();
   if (Cursor R = maybeLexEscapedIRValue(C, Token, ErrorCallback))
     return R.remaining();
+  if (Cursor R = maybeLexStringConstant(C, Token, ErrorCallback))
+    return R.remaining();
 
   Token.reset(MIToken::Error, C.remaining());
   ErrorCallback(C.location(),

Modified: llvm/trunk/lib/CodeGen/MIRParser/MILexer.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MIRParser/MILexer.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MIRParser/MILexer.h (original)
+++ llvm/trunk/lib/CodeGen/MIRParser/MILexer.h Tue Jul 11 15:23:00 2017
@@ -127,7 +127,8 @@ struct MIToken {
     NamedIRValue,
     IRValue,
     QuotedIRValue, // `<constant value>`
-    SubRegisterIndex
+    SubRegisterIndex,
+    StringConstant
   };
 
 private:

Modified: llvm/trunk/lib/CodeGen/MIRParser/MIParser.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MIRParser/MIParser.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MIRParser/MIParser.cpp (original)
+++ llvm/trunk/lib/CodeGen/MIRParser/MIParser.cpp Tue Jul 11 15:23:00 2017
@@ -229,6 +229,7 @@ public:
   bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
   bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
   bool parseMachinePointerInfo(MachinePointerInfo &Dest);
+  bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
   bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
   bool parseMachineMemoryOperand(MachineMemOperand *&Dest);
 
@@ -318,6 +319,10 @@ private:
   ///
   /// Return true if the name isn't a name of a bitmask target flag.
   bool getBitmaskTargetFlag(StringRef Name, unsigned &Flag);
+
+  /// parseStringConstant
+  ///   ::= StringConstant
+  bool parseStringConstant(std::string &Result);
 };
 
 } // end anonymous namespace
@@ -2135,6 +2140,26 @@ bool MIParser::parseMachinePointerInfo(M
   return false;
 }
 
+bool MIParser::parseOptionalScope(LLVMContext &Context,
+                                  SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
+    lex();
+    if (expectAndConsume(MIToken::lparen))
+      return error("expected '(' in syncscope");
+
+    std::string SSN;
+    if (parseStringConstant(SSN))
+      return true;
+
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+    if (expectAndConsume(MIToken::rparen))
+      return error("expected ')' in syncscope");
+  }
+
+  return false;
+}
+
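With this hook, a machine memory operand can carry an arbitrary scope, and when
no syncscope(...) clause is present SSID defaults to the system scope, mirroring
the IR-level default. A hedged example of the accepted MIR surface syntax, again
with a hypothetical "agent" scope:

  :: (volatile store syncscope("agent") seq_cst 4 into %ir.out)
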
 bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
   Order = AtomicOrdering::NotAtomic;
   if (Token.isNot(MIToken::Identifier))
@@ -2174,12 +2199,10 @@ bool MIParser::parseMachineMemoryOperand
     Flags |= MachineMemOperand::MOStore;
   lex();
 
-  // Optional "singlethread" scope.
-  SynchronizationScope Scope = SynchronizationScope::CrossThread;
-  if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
-    Scope = SynchronizationScope::SingleThread;
-    lex();
-  }
+  // Optional synchronization scope.
+  SyncScope::ID SSID;
+  if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
+    return true;
 
   // Up to two atomic orderings (cmpxchg provides guarantees on failure).
   AtomicOrdering Order, FailureOrder;
@@ -2244,7 +2267,7 @@ bool MIParser::parseMachineMemoryOperand
   if (expectAndConsume(MIToken::rparen))
     return true;
   Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
-                                 Scope, Order, FailureOrder);
+                                 SSID, Order, FailureOrder);
   return false;
 }
 
@@ -2457,6 +2480,14 @@ bool MIParser::getBitmaskTargetFlag(Stri
   return false;
 }
 
+bool MIParser::parseStringConstant(std::string &Result) {
+  if (Token.isNot(MIToken::StringConstant))
+    return error("expected string constant");
+  Result = Token.stringValue();
+  lex();
+  return false;
+}
+
 bool llvm::parseMachineBasicBlockDefinitions(PerFunctionMIParsingState &PFS,
                                              StringRef Src,
                                              SMDiagnostic &Error) {

Modified: llvm/trunk/lib/CodeGen/MIRPrinter.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MIRPrinter.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MIRPrinter.cpp (original)
+++ llvm/trunk/lib/CodeGen/MIRPrinter.cpp Tue Jul 11 15:23:00 2017
@@ -18,6 +18,7 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringRef.h"
 #include "llvm/ADT/Twine.h"
 #include "llvm/CodeGen/GlobalISel/RegisterBank.h"
@@ -139,6 +140,8 @@ class MIPrinter {
   ModuleSlotTracker &MST;
   const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
   const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;
 
   bool canPredictBranchProbabilities(const MachineBasicBlock &MBB) const;
   bool canPredictSuccessors(const MachineBasicBlock &MBB) const;
@@ -162,7 +165,8 @@ public:
   void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
              unsigned I, bool ShouldPrintRegisterTies,
              LLT TypeToPrint, bool IsDef = false);
-  void print(const MachineMemOperand &Op);
+  void print(const LLVMContext &Context, const MachineMemOperand &Op);
+  void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);
 
   void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
 };
@@ -731,11 +735,12 @@ void MIPrinter::print(const MachineInstr
 
   if (!MI.memoperands_empty()) {
     OS << " :: ";
+    const LLVMContext &Context = MF->getFunction()->getContext();
     bool NeedComma = false;
     for (const auto *Op : MI.memoperands()) {
       if (NeedComma)
         OS << ", ";
-      print(*Op);
+      print(Context, *Op);
       NeedComma = true;
     }
   }
@@ -1031,7 +1036,7 @@ void MIPrinter::print(const MachineOpera
   }
 }
 
-void MIPrinter::print(const MachineMemOperand &Op) {
+void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
   OS << '(';
   // TODO: Print operand's target specific flags.
   if (Op.isVolatile())
@@ -1049,8 +1054,7 @@ void MIPrinter::print(const MachineMemOp
     OS << "store ";
   }
 
-  if (Op.getSynchScope() == SynchronizationScope::SingleThread)
-    OS << "singlethread ";
+  printSyncScope(Context, Op.getSyncScopeID());
 
   if (Op.getOrdering() != AtomicOrdering::NotAtomic)
     OS << toIRString(Op.getOrdering()) << ' ';
@@ -1119,6 +1123,23 @@ void MIPrinter::print(const MachineMemOp
   OS << ')';
 }
 
+void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
+
+    OS << "syncscope(\"";
+    PrintEscapedString(SSNs[SSID], OS);
+    OS << "\") ";
+    break;
+  }
+  }
+}
+
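Printing is the inverse of the parser change above: the system scope stays
implicit, while every other scope, including the pre-defined single-thread one,
is spelled out so the MIR round-trips; the name table is fetched lazily from
the context on the first non-system scope. As a sketch:

  SyncScope::System        ->  (nothing printed)
  SyncScope::SingleThread  ->  syncscope("singlethread")
  a target-defined ID      ->  syncscope("agent")   // hypothetical name
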
 static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
                              const TargetRegisterInfo *TRI) {
   int Reg = TRI->getLLVMRegNum(DwarfReg, true);

Modified: llvm/trunk/lib/CodeGen/MachineFunction.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineFunction.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MachineFunction.cpp (original)
+++ llvm/trunk/lib/CodeGen/MachineFunction.cpp Tue Jul 11 15:23:00 2017
@@ -305,11 +305,11 @@ MachineFunction::DeleteMachineBasicBlock
 MachineMemOperand *MachineFunction::getMachineMemOperand(
     MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
     unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
-    SynchronizationScope SynchScope, AtomicOrdering Ordering,
+    SyncScope::ID SSID, AtomicOrdering Ordering,
     AtomicOrdering FailureOrdering) {
   return new (Allocator)
       MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
-                        SynchScope, Ordering, FailureOrdering);
+                        SSID, Ordering, FailureOrdering);
 }
 
 MachineMemOperand *
@@ -320,13 +320,13 @@ MachineFunction::getMachineMemOperand(co
                MachineMemOperand(MachinePointerInfo(MMO->getValue(),
                                                     MMO->getOffset()+Offset),
                                  MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                                 AAMDNodes(), nullptr, MMO->getSynchScope(),
+                                 AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                                  MMO->getOrdering(), MMO->getFailureOrdering());
   return new (Allocator)
              MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
                                                   MMO->getOffset()+Offset),
                                MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                               AAMDNodes(), nullptr, MMO->getSynchScope(),
+                               AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                                MMO->getOrdering(), MMO->getFailureOrdering());
 }
 
@@ -359,7 +359,7 @@ MachineFunction::extractLoadMemRefs(Mach
                                (*I)->getFlags() & ~MachineMemOperand::MOStore,
                                (*I)->getSize(), (*I)->getBaseAlignment(),
                                (*I)->getAAInfo(), nullptr,
-                               (*I)->getSynchScope(), (*I)->getOrdering(),
+                               (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                (*I)->getFailureOrdering());
         Result[Index] = JustLoad;
       }
@@ -393,7 +393,7 @@ MachineFunction::extractStoreMemRefs(Mac
                                (*I)->getFlags() & ~MachineMemOperand::MOLoad,
                                (*I)->getSize(), (*I)->getBaseAlignment(),
                                (*I)->getAAInfo(), nullptr,
-                               (*I)->getSynchScope(), (*I)->getOrdering(),
+                               (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                (*I)->getFailureOrdering());
         Result[Index] = JustStore;
       }

Modified: llvm/trunk/lib/CodeGen/MachineInstr.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/MachineInstr.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/MachineInstr.cpp (original)
+++ llvm/trunk/lib/CodeGen/MachineInstr.cpp Tue Jul 11 15:23:00 2017
@@ -614,7 +614,7 @@ MachineMemOperand::MachineMemOperand(Mac
                                      uint64_t s, unsigned int a,
                                      const AAMDNodes &AAInfo,
                                      const MDNode *Ranges,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      AtomicOrdering Ordering,
                                      AtomicOrdering FailureOrdering)
     : PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),
@@ -625,8 +625,8 @@ MachineMemOperand::MachineMemOperand(Mac
   assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
   assert((isLoad() || isStore()) && "Not a load/store!");
 
-  AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
-  assert(getSynchScope() == SynchScope && "Value truncated");
+  AtomicInfo.SSID = static_cast<unsigned>(SSID);
+  assert(getSyncScopeID() == SSID && "Value truncated");
   AtomicInfo.Ordering = static_cast<unsigned>(Ordering);
   assert(getOrdering() == Ordering && "Value truncated");
   AtomicInfo.FailureOrdering = static_cast<unsigned>(FailureOrdering);
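The scope ID is packed into the AtomicInfo bitfield next to the orderings,
which is what the "Value truncated" assert guards: SyncScope::ID is an 8-bit
type, so the bitfield must round-trip every possible value. That width also
bounds how many distinct scopes a context can hand out, matching the limit
asserted in LLVMContextImpl::getOrInsertSyncScopeID later in this patch.
Conceptually:

  static_assert(sizeof(SyncScope::ID) == 1,
                "scope IDs fit the 8-bit MachineMemOperand encoding");
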

Modified: llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp (original)
+++ llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAG.cpp Tue Jul 11 15:23:00 2017
@@ -5443,7 +5443,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
     unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
     SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
     unsigned Alignment, AtomicOrdering SuccessOrdering,
-    AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
+    AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
   assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
          Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
   assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");
@@ -5459,7 +5459,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
                MachineMemOperand::MOStore;
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
-                            AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
+                            AAMDNodes(), nullptr, SSID, SuccessOrdering,
                             FailureOrdering);
 
   return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);
@@ -5481,7 +5481,7 @@ SDValue SelectionDAG::getAtomic(unsigned
                                 SDValue Chain, SDValue Ptr, SDValue Val,
                                 const Value *PtrVal, unsigned Alignment,
                                 AtomicOrdering Ordering,
-                                SynchronizationScope SynchScope) {
+                                SyncScope::ID SSID) {
   if (Alignment == 0)  // Ensure that codegen never sees alignment 0
     Alignment = getEVTAlignment(MemVT);
 
@@ -5501,7 +5501,7 @@ SDValue SelectionDAG::getAtomic(unsigned
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
                             MemVT.getStoreSize(), Alignment, AAMDNodes(),
-                            nullptr, SynchScope, Ordering);
+                            nullptr, SSID, Ordering);
 
   return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
 }

Modified: llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp (original)
+++ llvm/trunk/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp Tue Jul 11 15:23:00 2017
@@ -3990,7 +3990,7 @@ void SelectionDAGBuilder::visitAtomicCmp
   SDLoc dl = getCurSDLoc();
   AtomicOrdering SuccessOrder = I.getSuccessOrdering();
   AtomicOrdering FailureOrder = I.getFailureOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -4000,7 +4000,7 @@ void SelectionDAGBuilder::visitAtomicCmp
       ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
       getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
       getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
-      /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
+      /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);
 
   SDValue OutChain = L.getValue(2);
 
@@ -4026,7 +4026,7 @@ void SelectionDAGBuilder::visitAtomicRMW
   case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
   }
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -4037,7 +4037,7 @@ void SelectionDAGBuilder::visitAtomicRMW
                   getValue(I.getPointerOperand()),
                   getValue(I.getValOperand()),
                   I.getPointerOperand(),
-                  /* Alignment=*/ 0, Order, Scope);
+                  /* Alignment=*/ 0, Order, SSID);
 
   SDValue OutChain = L.getValue(1);
 
@@ -4052,7 +4052,7 @@ void SelectionDAGBuilder::visitFence(con
   Ops[0] = getRoot();
   Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
-  Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
+  Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
   DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
 }
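Fences have no MachineMemOperand, so the scope travels through the DAG as an
explicit constant operand alongside the ordering; targets decode it back and
can still fast-path the single-thread scope, as in the ARM and SystemZ lowering
changes below:

  // Target-side decode, as in LowerATOMIC_FENCE below:
  auto SSID = static_cast<SyncScope::ID>(
      cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
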
@@ -4060,7 +4060,7 @@ void SelectionDAGBuilder::visitFence(con
 void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
   SDLoc dl = getCurSDLoc();
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -4078,7 +4078,7 @@ void SelectionDAGBuilder::visitAtomicLoa
                            VT.getStoreSize(),
                            I.getAlignment() ? I.getAlignment() :
                                               DAG.getEVTAlignment(VT),
-                           AAMDNodes(), nullptr, Scope, Order);
+                           AAMDNodes(), nullptr, SSID, Order);
 
   InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
   SDValue L =
@@ -4095,7 +4095,7 @@ void SelectionDAGBuilder::visitAtomicSto
   SDLoc dl = getCurSDLoc();
 
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();
 
   SDValue InChain = getRoot();
 
@@ -4112,7 +4112,7 @@ void SelectionDAGBuilder::visitAtomicSto
                   getValue(I.getPointerOperand()),
                   getValue(I.getValueOperand()),
                   I.getPointerOperand(), I.getAlignment(),
-                  Order, Scope);
+                  Order, SSID);
 
   DAG.setRoot(OutChain);
 }

Modified: llvm/trunk/lib/IR/AsmWriter.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/AsmWriter.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/AsmWriter.cpp (original)
+++ llvm/trunk/lib/IR/AsmWriter.cpp Tue Jul 11 15:23:00 2017
@@ -2119,6 +2119,8 @@ class AssemblyWriter {
   bool ShouldPreserveUseListOrder;
   UseListOrderStack UseListOrders;
   SmallVector<StringRef, 8> MDNames;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;
 
 public:
   /// Construct an AssemblyWriter with an external SlotTracker
@@ -2134,10 +2136,15 @@ public:
   void writeOperand(const Value *Op, bool PrintType);
   void writeParamOperand(const Value *Operand, AttributeSet Attrs);
   void writeOperandBundles(ImmutableCallSite CS);
-  void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
-  void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+  void writeSyncScope(const LLVMContext &Context,
+                      SyncScope::ID SSID);
+  void writeAtomic(const LLVMContext &Context,
+                   AtomicOrdering Ordering,
+                   SyncScope::ID SSID);
+  void writeAtomicCmpXchg(const LLVMContext &Context,
+                          AtomicOrdering SuccessOrdering,
                           AtomicOrdering FailureOrdering,
-                          SynchronizationScope SynchScope);
+                          SyncScope::ID SSID);
 
   void writeAllMDNodes();
   void writeMDNode(unsigned Slot, const MDNode *Node);
@@ -2199,30 +2206,42 @@ void AssemblyWriter::writeOperand(const
   WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
 }
 
-void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
-                                 SynchronizationScope SynchScope) {
-  if (Ordering == AtomicOrdering::NotAtomic)
-    return;
+void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
+                                    SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
 
-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
+    Out << " syncscope(\"";
+    PrintEscapedString(SSNs[SSID], Out);
+    Out << "\")";
+    break;
   }
+  }
+}
 
+void AssemblyWriter::writeAtomic(const LLVMContext &Context,
+                                 AtomicOrdering Ordering,
+                                 SyncScope::ID SSID) {
+  if (Ordering == AtomicOrdering::NotAtomic)
+    return;
+
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(Ordering);
 }
 
-void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
+                                        AtomicOrdering SuccessOrdering,
                                         AtomicOrdering FailureOrdering,
-                                        SynchronizationScope SynchScope) {
+                                        SyncScope::ID SSID) {
   assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
          FailureOrdering != AtomicOrdering::NotAtomic);
 
-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
-  }
-
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(SuccessOrdering);
   Out << " " << toIRString(FailureOrdering);
 }
@@ -3215,21 +3234,22 @@ void AssemblyWriter::printInstruction(co
   // Print atomic ordering/alignment for memory operations
   if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
     if (LI->isAtomic())
-      writeAtomic(LI->getOrdering(), LI->getSynchScope());
+      writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
     if (LI->getAlignment())
       Out << ", align " << LI->getAlignment();
   } else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
     if (SI->isAtomic())
-      writeAtomic(SI->getOrdering(), SI->getSynchScope());
+      writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
     if (SI->getAlignment())
       Out << ", align " << SI->getAlignment();
   } else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
-    writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
-                       CXI->getSynchScope());
+    writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
+                       CXI->getFailureOrdering(), CXI->getSyncScopeID());
   } else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
-    writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
+    writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
+                RMWI->getSyncScopeID());
   } else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
-    writeAtomic(FI->getOrdering(), FI->getSynchScope());
+    writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
   }
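The net effect on textual IR, as a hedged before/after sketch ("agent" is a
hypothetical target scope):

  fence syncscope("singlethread") seq_cst   ; was: fence singlethread seq_cst
  fence seq_cst                             ; system scope stays implicit
  %v = load atomic i32, i32* %p syncscope("agent") monotonic, align 4
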
 
   // Print Metadata info.

Modified: llvm/trunk/lib/IR/Core.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Core.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/Core.cpp (original)
+++ llvm/trunk/lib/IR/Core.cpp Tue Jul 11 15:23:00 2017
@@ -2756,11 +2756,14 @@ static LLVMAtomicOrdering mapToLLVMOrder
   llvm_unreachable("Invalid AtomicOrdering value!");
 }
 
+// TODO: Should this and other atomic instructions support building with
+// "syncscope"?
 LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
                             LLVMBool isSingleThread, const char *Name) {
   return wrap(
     unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
-                           isSingleThread ? SingleThread : CrossThread,
+                           isSingleThread ? SyncScope::SingleThread
+                                          : SyncScope::System,
                            Name));
 }
 
@@ -3042,7 +3045,8 @@ LLVMValueRef LLVMBuildAtomicRMW(LLVMBuil
     case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
   }
   return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
-    mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
+    mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
+                                                : SyncScope::System));
 }
 
 LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
@@ -3054,7 +3058,7 @@ LLVMValueRef LLVMBuildAtomicCmpXchg(LLVM
   return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
                 unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
                 mapFromLLVMOrdering(FailureOrdering),
-                singleThread ? SingleThread : CrossThread));
+                singleThread ? SyncScope::SingleThread : SyncScope::System));
 }
 
 
@@ -3062,17 +3066,18 @@ LLVMBool LLVMIsAtomicSingleThread(LLVMVa
   Value *P = unwrap<Value>(AtomicInst);
 
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->getSynchScope() == SingleThread;
-  return cast<AtomicCmpXchgInst>(P)->getSynchScope() == SingleThread;
+    return I->getSyncScopeID() == SyncScope::SingleThread;
+  return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
+             SyncScope::SingleThread;
 }
 
 void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
   Value *P = unwrap<Value>(AtomicInst);
-  SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
+  SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;
 
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->setSynchScope(Sync);
-  return cast<AtomicCmpXchgInst>(P)->setSynchScope(Sync);
+    return I->setSyncScopeID(SSID);
+  return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
 }
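The stable C API keeps its boolean view of scopes: only the single-thread and
system scopes are expressible, and LLVMIsAtomicSingleThread simply reports
whether the scope is SingleThread, so a named target scope reads as "not single
thread". Per the TODO above, richer scopes would need new entry points.
Existing usage is unchanged:

  LLVMBuildFence(B, LLVMAtomicOrderingSequentiallyConsistent,
                 /*isSingleThread=*/1, "");
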
 
 LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst)  {

Modified: llvm/trunk/lib/IR/Instruction.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Instruction.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/Instruction.cpp (original)
+++ llvm/trunk/lib/IR/Instruction.cpp Tue Jul 11 15:23:00 2017
@@ -362,13 +362,13 @@ static bool haveSameSpecialState(const I
            (LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
-           LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
+           LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
   if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
     return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
            (SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
-           SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
+           SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
   if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
     return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
   if (const CallInst *CI = dyn_cast<CallInst>(I1))
@@ -386,7 +386,7 @@ static bool haveSameSpecialState(const I
     return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
   if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
     return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
-           FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
+           FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
   if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
     return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
            CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&
@@ -394,12 +394,13 @@ static bool haveSameSpecialState(const I
                cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
            CXI->getFailureOrdering() ==
                cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
-           CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
+           CXI->getSyncScopeID() ==
+               cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
   if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
     return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
            RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
            RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
-           RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
+           RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();
 
   return true;
 }

Modified: llvm/trunk/lib/IR/Instructions.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Instructions.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/Instructions.cpp (original)
+++ llvm/trunk/lib/IR/Instructions.cpp Tue Jul 11 15:23:00 2017
@@ -1304,34 +1304,34 @@ LoadInst::LoadInst(Value *Ptr, const Twi
 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, Instruction *InsertBef)
     : LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertBef) {}
+               SyncScope::System, InsertBef) {}
 
 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, BasicBlock *InsertAE)
     : LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertAE) {}
+               SyncScope::System, InsertAE) {}
 
 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope, Instruction *InsertBef)
+                   SyncScope::ID SSID, Instruction *InsertBef)
     : UnaryInstruction(Ty, Load, Ptr, InsertBef) {
   assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }
 
 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile, 
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope,
+                   SyncScope::ID SSID,
                    BasicBlock *InsertAE)
   : UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
                      Load, Ptr, InsertAE) {
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }
@@ -1419,16 +1419,16 @@ StoreInst::StoreInst(Value *val, Value *
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      Instruction *InsertBefore)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertBefore) {}
+                SyncScope::System, InsertBefore) {}
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      BasicBlock *InsertAtEnd)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertAtEnd) {}
+                SyncScope::System, InsertAtEnd) {}
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
   : Instruction(Type::getVoidTy(val->getContext()), Store,
                 OperandTraits<StoreInst>::op_begin(this),
@@ -1438,13 +1438,13 @@ StoreInst::StoreInst(Value *val, Value *
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }
 
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
   : Instruction(Type::getVoidTy(val->getContext()), Store,
                 OperandTraits<StoreInst>::op_begin(this),
@@ -1454,7 +1454,7 @@ StoreInst::StoreInst(Value *val, Value *
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }
 
@@ -1474,13 +1474,13 @@ void StoreInst::setAlignment(unsigned Al
 void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
                              AtomicOrdering SuccessOrdering,
                              AtomicOrdering FailureOrdering,
-                             SynchronizationScope SynchScope) {
+                             SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Cmp;
   Op<2>() = NewVal;
   setSuccessOrdering(SuccessOrdering);
   setFailureOrdering(FailureOrdering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 
   assert(getOperand(0) && getOperand(1) && getOperand(2) &&
          "All operands must be non-null!");
@@ -1507,25 +1507,25 @@ void AtomicCmpXchgInst::Init(Value *Ptr,
 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      Instruction *InsertBefore)
     : Instruction(
           StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
           AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
           OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }
 
 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      BasicBlock *InsertAtEnd)
     : Instruction(
           StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
           AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
           OperandTraits<AtomicCmpXchgInst>::operands(this), InsertAtEnd) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -1534,12 +1534,12 @@ AtomicCmpXchgInst::AtomicCmpXchgInst(Val
 
 void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
                          AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope) {
+                         SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Val;
   setOperation(Operation);
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 
   assert(getOperand(0) && getOperand(1) &&
          "All operands must be non-null!");
@@ -1554,24 +1554,24 @@ void AtomicRMWInst::Init(BinOp Operation
 
 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              Instruction *InsertBefore)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertBefore) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }
 
 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              BasicBlock *InsertAtEnd)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertAtEnd) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -1579,19 +1579,19 @@ AtomicRMWInst::AtomicRMWInst(BinOp Opera
 //===----------------------------------------------------------------------===//
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering, 
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
   : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 }
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering, 
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
   : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 }
 
 //===----------------------------------------------------------------------===//
@@ -3795,12 +3795,12 @@ AllocaInst *AllocaInst::cloneImpl() cons
 
 LoadInst *LoadInst::cloneImpl() const {
   return new LoadInst(getOperand(0), Twine(), isVolatile(),
-                      getAlignment(), getOrdering(), getSynchScope());
+                      getAlignment(), getOrdering(), getSyncScopeID());
 }
 
 StoreInst *StoreInst::cloneImpl() const {
   return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
-                       getAlignment(), getOrdering(), getSynchScope());
+                       getAlignment(), getOrdering(), getSyncScopeID());
   
 }
 
@@ -3808,7 +3808,7 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cl
   AtomicCmpXchgInst *Result =
     new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
                           getSuccessOrdering(), getFailureOrdering(),
-                          getSynchScope());
+                          getSyncScopeID());
   Result->setVolatile(isVolatile());
   Result->setWeak(isWeak());
   return Result;
@@ -3816,14 +3816,14 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cl
 
 AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
   AtomicRMWInst *Result =
-    new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1),
-                      getOrdering(), getSynchScope());
+    new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
+                      getOrdering(), getSyncScopeID());
   Result->setVolatile(isVolatile());
   return Result;
 }
 
 FenceInst *FenceInst::cloneImpl() const {
-  return new FenceInst(getContext(), getOrdering(), getSynchScope());
+  return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
 }
 
 TruncInst *TruncInst::cloneImpl() const {

Modified: llvm/trunk/lib/IR/LLVMContext.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/LLVMContext.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/LLVMContext.cpp (original)
+++ llvm/trunk/lib/IR/LLVMContext.cpp Tue Jul 11 15:23:00 2017
@@ -81,6 +81,16 @@ LLVMContext::LLVMContext() : pImpl(new L
   assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
          "gc-transition operand bundle id drifted!");
   (void)GCTransitionEntry;
+
+  SyncScope::ID SingleThreadSSID =
+      pImpl->getOrInsertSyncScopeID("singlethread");
+  assert(SingleThreadSSID == SyncScope::SingleThread &&
+         "singlethread synchronization scope ID drifted!");
+
+  SyncScope::ID SystemSSID =
+      pImpl->getOrInsertSyncScopeID("");
+  assert(SystemSSID == SyncScope::System &&
+         "system synchronization scope ID drifted!");
 }
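Since IDs are handed out in registration order, registering these two names
first is exactly what pins SyncScope::SingleThread to 0 and SyncScope::System
to 1 in every context; the asserts guard that invariant, which the bitcode
reader and the printers rely on when special-casing the pre-defined scopes. A
sketch of the resulting behavior ("agent" hypothetical):

  LLVMContext Ctx;
  assert(Ctx.getOrInsertSyncScopeID("singlethread") == SyncScope::SingleThread);
  assert(Ctx.getOrInsertSyncScopeID("") == SyncScope::System);
  SyncScope::ID Agent = Ctx.getOrInsertSyncScopeID("agent"); // next free ID, 2
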
 
 LLVMContext::~LLVMContext() { delete pImpl; }
@@ -255,6 +265,14 @@ uint32_t LLVMContext::getOperandBundleTa
   return pImpl->getOperandBundleTagID(Tag);
 }
 
+SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
+  return pImpl->getOrInsertSyncScopeID(SSN);
+}
+
+void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
+  pImpl->getSyncScopeNames(SSNs);
+}
+
 void LLVMContext::setGC(const Function &Fn, std::string GCName) {
   auto It = pImpl->GCNames.find(&Fn);
 

Modified: llvm/trunk/lib/IR/LLVMContextImpl.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/LLVMContextImpl.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/LLVMContextImpl.cpp (original)
+++ llvm/trunk/lib/IR/LLVMContextImpl.cpp Tue Jul 11 15:23:00 2017
@@ -205,6 +205,20 @@ uint32_t LLVMContextImpl::getOperandBund
   return I->second;
 }
 
+SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
+  auto NewSSID = SSC.size();
+  assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
+         "Hit the maximum number of synchronization scopes allowed!");
+  return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
+}
+
+void LLVMContextImpl::getSyncScopeNames(
+    SmallVectorImpl<StringRef> &SSNs) const {
+  SSNs.resize(SSC.size());
+  for (const auto &SSE : SSC)
+    SSNs[SSE.second] = SSE.first();
+}
+
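Two properties fall out of StringMap::insert not overwriting existing entries:
interning is idempotent, and getSyncScopeNames returns the names indexed by ID,
which is what lets the printers index SSNs[SSID] directly. Sketch:

  SyncScope::ID A = Ctx.getOrInsertSyncScopeID("agent"); // hypothetical name
  SyncScope::ID B = Ctx.getOrInsertSyncScopeID("agent");
  assert(A == B);             // re-inserting returns the existing ID
  SmallVector<StringRef, 8> SSNs;
  Ctx.getSyncScopeNames(SSNs);
  assert(SSNs[A] == "agent"); // names ordered by increasing ID
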
 /// Singleton instance of the OptBisect class.
 ///
 /// This singleton is accessed via the LLVMContext::getOptBisect() function.  It

Modified: llvm/trunk/lib/IR/LLVMContextImpl.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/LLVMContextImpl.h?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/LLVMContextImpl.h (original)
+++ llvm/trunk/lib/IR/LLVMContextImpl.h Tue Jul 11 15:23:00 2017
@@ -1297,6 +1297,20 @@ public:
   void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// A set of interned synchronization scopes.  The StringMap maps
+  /// synchronization scope names to their respective synchronization scope IDs.
+  StringMap<SyncScope::ID> SSC;
+
+  /// getOrInsertSyncScopeID - Maps a synchronization scope name to a
+  /// synchronization scope ID.  Every synchronization scope registered with
+  /// LLVMContext has a unique ID, except for the pre-defined ones.
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates the client-supplied SmallVector with
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope IDs.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Maintain the GC name for each function.
   ///
   /// This saves allocating an additional word in Function for programs which

Modified: llvm/trunk/lib/IR/Verifier.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/IR/Verifier.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/IR/Verifier.cpp (original)
+++ llvm/trunk/lib/IR/Verifier.cpp Tue Jul 11 15:23:00 2017
@@ -3108,7 +3108,7 @@ void Verifier::visitLoadInst(LoadInst &L
            ElTy, &LI);
     checkAtomicMemAccessSize(ElTy, &LI);
   } else {
-    Assert(LI.getSynchScope() == CrossThread,
+    Assert(LI.getSyncScopeID() == SyncScope::System,
            "Non-atomic load cannot have SynchronizationScope specified", &LI);
   }
 
@@ -3137,7 +3137,7 @@ void Verifier::visitStoreInst(StoreInst
            ElTy, &SI);
     checkAtomicMemAccessSize(ElTy, &SI);
   } else {
-    Assert(SI.getSynchScope() == CrossThread,
+    Assert(SI.getSyncScopeID() == SyncScope::System,
            "Non-atomic store cannot have SynchronizationScope specified", &SI);
   }
   visitInstruction(SI);

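The verifier check changes from the old CrossThread enumerator to the
pre-defined System scope, which is also what the instruction constructors
default to, so IR producers that never mention a scope are unaffected. A
sketch of the contract, assuming the post-patch constructor defaults:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Instructions.h"
  #include <cassert>

  using namespace llvm;

  static void scopedLoads(IRBuilder<> &Builder, Value *Ptr) {
    // A non-atomic load keeps the default System scope and verifies cleanly.
    LoadInst *Plain = Builder.CreateLoad(Ptr);
    assert(Plain->getSyncScopeID() == SyncScope::System);

    // Only an atomic access may legally carry a non-System scope.
    LoadInst *Scoped = Builder.CreateAlignedLoad(Ptr, 4);
    Scoped->setAtomic(AtomicOrdering::Acquire,
                      Builder.getContext().getOrInsertSyncScopeID("agent"));
  }
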
Modified: llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/ARM/ARMISelLowering.cpp Tue Jul 11 15:23:00 2017
@@ -3398,9 +3398,9 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHA
 static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
                                  const ARMSubtarget *Subtarget) {
   SDLoc dl(Op);
-  ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
-  auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
-  if (Scope == SynchronizationScope::SingleThread)
+  ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
+  auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
+  if (SSID == SyncScope::SingleThread)
     return Op;
 
   if (!Subtarget->hasDataBarrier()) {

Modified: llvm/trunk/lib/Target/SystemZ/SystemZISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/SystemZ/SystemZISelLowering.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Target/SystemZ/SystemZISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/SystemZ/SystemZISelLowering.cpp Tue Jul 11 15:23:00 2017
@@ -3182,13 +3182,13 @@ SDValue SystemZTargetLowering::lowerATOM
   SDLoc DL(Op);
   AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
     cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
-  SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+  SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
     cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
 
   // The only fence that needs an instruction is a sequentially-consistent
   // cross-thread fence.
   if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-      FenceScope == CrossThread) {
+      FenceSSID == SyncScope::System) {
     return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
                                       Op.getOperand(0)),
                    0);

Modified: llvm/trunk/lib/Target/X86/X86ISelLowering.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Target/X86/X86ISelLowering.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Target/X86/X86ISelLowering.cpp (original)
+++ llvm/trunk/lib/Target/X86/X86ISelLowering.cpp Tue Jul 11 15:23:00 2017
@@ -22850,7 +22850,7 @@ X86TargetLowering::lowerIdempotentRMWInt
 
   auto Builder = IRBuilder<>(AI);
   Module *M = Builder.GetInsertBlock()->getParent()->getParent();
-  auto SynchScope = AI->getSynchScope();
+  auto SSID = AI->getSyncScopeID();
   // We must restrict the ordering to avoid generating loads with Release or
   // ReleaseAcquire orderings.
   auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());
@@ -22872,7 +22872,7 @@ X86TargetLowering::lowerIdempotentRMWInt
   // otherwise, we might be able to be more aggressive on relaxed idempotent
   // rmw. In practice, they do not look useful, so we don't try to be
   // especially clever.
-  if (SynchScope == SingleThread)
+  if (SSID == SyncScope::SingleThread)
     // FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
     // the IR level, so we must wrap it in an intrinsic.
     return nullptr;
@@ -22891,7 +22891,7 @@ X86TargetLowering::lowerIdempotentRMWInt
   // Finally we can emit the atomic load.
   LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
           AI->getType()->getPrimitiveSizeInBits());
-  Loaded->setAtomic(Order, SynchScope);
+  Loaded->setAtomic(Order, SSID);
   AI->replaceAllUsesWith(Loaded);
   AI->eraseFromParent();
   return Loaded;
@@ -22902,13 +22902,13 @@ static SDValue LowerATOMIC_FENCE(SDValue
   SDLoc dl(Op);
   AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
     cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
-  SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+  SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
     cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
 
   // The only fence that needs an instruction is a sequentially-consistent
   // cross-thread fence.
   if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
-      FenceScope == CrossThread) {
+      FenceSSID == SyncScope::System) {
     if (Subtarget.hasMFence())
       return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));
 

Modified: llvm/trunk/lib/Transforms/IPO/GlobalOpt.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/IPO/GlobalOpt.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/IPO/GlobalOpt.cpp (original)
+++ llvm/trunk/lib/Transforms/IPO/GlobalOpt.cpp Tue Jul 11 15:23:00 2017
@@ -837,7 +837,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVari
     if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
       // The global is initialized when the store to it occurs.
       new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
-                    SI->getOrdering(), SI->getSynchScope(), SI);
+                    SI->getOrdering(), SI->getSyncScopeID(), SI);
       SI->eraseFromParent();
       continue;
     }
@@ -854,7 +854,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVari
       // Replace the cmp X, 0 with a use of the bool value.
       // Sink the load to where the compare was, if atomic rules allow us to.
       Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
-                               LI->getOrdering(), LI->getSynchScope(),
+                               LI->getOrdering(), LI->getSyncScopeID(),
                                LI->isUnordered() ? (Instruction*)ICI : LI);
       InitBoolUsed = true;
       switch (ICI->getPredicate()) {
@@ -1605,7 +1605,7 @@ static bool TryToShrinkGlobalToBoolean(G
           assert(LI->getOperand(0) == GV && "Not a copy!");
           // Insert a new load, to preserve the saved value.
           StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                                  LI->getOrdering(), LI->getSynchScope(), LI);
+                                  LI->getOrdering(), LI->getSyncScopeID(), LI);
         } else {
           assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
                  "This is not a form that we understand!");
@@ -1614,12 +1614,12 @@ static bool TryToShrinkGlobalToBoolean(G
         }
       }
       new StoreInst(StoreVal, NewGV, false, 0,
-                    SI->getOrdering(), SI->getSynchScope(), SI);
+                    SI->getOrdering(), SI->getSyncScopeID(), SI);
     } else {
       // Change the load into a load of bool then a select.
       LoadInst *LI = cast<LoadInst>(UI);
       LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
-                                   LI->getOrdering(), LI->getSynchScope(), LI);
+                                   LI->getOrdering(), LI->getSyncScopeID(), LI);
       Value *NSI;
       if (IsOneZero)
         NSI = new ZExtInst(NLI, LI->getType(), "", LI);

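The GlobalOpt changes above, and the InstCombine, GVN, JumpThreading, and
SROA changes below, are all instances of one pattern: whenever a transform
re-materializes a memory access, the ordering and the scope ID must travel
together. A hypothetical helper (not part of this change) capturing that
idiom:

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Hypothetical: clone a load at the builder's insertion point, carrying
  // over alignment, volatility, ordering, and synchronization scope ID.
  static LoadInst *cloneLoadPreservingAtomicity(IRBuilder<> &Builder,
                                                LoadInst &LI) {
    LoadInst *NewLI =
        Builder.CreateAlignedLoad(LI.getPointerOperand(), LI.getAlignment(),
                                  LI.isVolatile(), LI.getName());
    NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
    return NewLI;
  }
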
Modified: llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp (original)
+++ llvm/trunk/lib/Transforms/InstCombine/InstCombineLoadStoreAlloca.cpp Tue Jul 11 15:23:00 2017
@@ -461,7 +461,7 @@ static LoadInst *combineLoadToNewType(In
   LoadInst *NewLoad = IC.Builder.CreateAlignedLoad(
       IC.Builder.CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
       LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
-  NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
+  NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
   MDBuilder MDB(NewLoad->getContext());
   for (const auto &MDPair : MD) {
     unsigned ID = MDPair.first;
@@ -521,7 +521,7 @@ static StoreInst *combineStoreToNewValue
   StoreInst *NewStore = IC.Builder.CreateAlignedStore(
       V, IC.Builder.CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
       SI.getAlignment(), SI.isVolatile());
-  NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
+  NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
   for (const auto &MDPair : MD) {
     unsigned ID = MDPair.first;
     MDNode *N = MDPair.second;
@@ -1025,9 +1025,9 @@ Instruction *InstCombiner::visitLoadInst
                                           SI->getOperand(2)->getName()+".val");
         assert(LI.isUnordered() && "implied by above");
         V1->setAlignment(Align);
-        V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
         V2->setAlignment(Align);
-        V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
         return SelectInst::Create(SI->getCondition(), V1, V2);
       }
 
@@ -1540,7 +1540,7 @@ bool InstCombiner::SimplifyStoreAtEndOfB
                                    SI.isVolatile(),
                                    SI.getAlignment(),
                                    SI.getOrdering(),
-                                   SI.getSynchScope());
+                                   SI.getSyncScopeID());
   InsertNewInstBefore(NewSI, *BBI);
   // The debug locations of the original instructions might differ; merge them.
   NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),

Modified: llvm/trunk/lib/Transforms/Instrumentation/ThreadSanitizer.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Instrumentation/ThreadSanitizer.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Instrumentation/ThreadSanitizer.cpp (original)
+++ llvm/trunk/lib/Transforms/Instrumentation/ThreadSanitizer.cpp Tue Jul 11 15:23:00 2017
@@ -379,10 +379,11 @@ void ThreadSanitizer::chooseInstructions
 }
 
 static bool isAtomic(Instruction *I) {
+  // TODO: Ask TTI whether the synchronization scope is between threads.
   if (LoadInst *LI = dyn_cast<LoadInst>(I))
-    return LI->isAtomic() && LI->getSynchScope() == CrossThread;
+    return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
   if (StoreInst *SI = dyn_cast<StoreInst>(I))
-    return SI->isAtomic() && SI->getSynchScope() == CrossThread;
+    return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
   if (isa<AtomicRMWInst>(I))
     return true;
   if (isa<AtomicCmpXchgInst>(I))
@@ -676,7 +677,7 @@ bool ThreadSanitizer::instrumentAtomic(I
     I->eraseFromParent();
   } else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
     Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
-    Function *F = FI->getSynchScope() == SingleThread ?
+    Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
         TsanAtomicSignalFence : TsanAtomicThreadFence;
     CallInst *C = CallInst::Create(F, Args);
     ReplaceInstWithInst(I, C);

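Note the deliberate widening in isAtomic() above: the old predicate
instrumented only CrossThread accesses, whereas the new one treats every
scope other than SingleThread (including target-specific scopes) as
potentially synchronizing between threads, the conservative choice until the
TODO about querying TTI is resolved. Restated for loads (a sketch mirroring
the code above, not a new API):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"

  using namespace llvm;

  // Anything not explicitly single-threaded is assumed to synchronize
  // between threads, so e.g. syncscope("agent") is still instrumented.
  static bool tsanConsidersAtomic(const LoadInst &LI) {
    return LI.isAtomic() && LI.getSyncScopeID() != SyncScope::SingleThread;
  }
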
Modified: llvm/trunk/lib/Transforms/Scalar/GVN.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/GVN.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/GVN.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/GVN.cpp Tue Jul 11 15:23:00 2017
@@ -1166,7 +1166,7 @@ bool GVN::PerformLoadPRE(LoadInst *LI, A
 
     auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
                                  LI->isVolatile(), LI->getAlignment(),
-                                 LI->getOrdering(), LI->getSynchScope(),
+                                 LI->getOrdering(), LI->getSyncScopeID(),
                                  UnavailablePred->getTerminator());
 
     // Transfer the old load's AA tags to the new load.

Modified: llvm/trunk/lib/Transforms/Scalar/JumpThreading.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/JumpThreading.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/JumpThreading.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/JumpThreading.cpp Tue Jul 11 15:23:00 2017
@@ -1212,7 +1212,7 @@ bool JumpThreadingPass::SimplifyPartiall
     LoadInst *NewVal = new LoadInst(
         LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
         LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
-        LI->getSynchScope(), UnavailablePred->getTerminator());
+        LI->getSyncScopeID(), UnavailablePred->getTerminator());
     NewVal->setDebugLoc(LI->getDebugLoc());
     if (AATags)
       NewVal->setAAMetadata(AATags);

Modified: llvm/trunk/lib/Transforms/Scalar/SROA.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Scalar/SROA.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Scalar/SROA.cpp (original)
+++ llvm/trunk/lib/Transforms/Scalar/SROA.cpp Tue Jul 11 15:23:00 2017
@@ -2398,7 +2398,7 @@ private:
       LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
                                               LI.isVolatile(), LI.getName());
       if (LI.isVolatile())
-        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
 
       // Any !nonnull metadata or !range metadata on the old load is also valid
       // on the new load. This is even true in some cases even when the loads
@@ -2433,7 +2433,7 @@ private:
                                               getSliceAlign(TargetTy),
                                               LI.isVolatile(), LI.getName());
       if (LI.isVolatile())
-        NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+        NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
 
       V = NewLI;
       IsPtrAdjusted = true;
@@ -2576,7 +2576,7 @@ private:
     }
     NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
     if (SI.isVolatile())
-      NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
+      NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
     Pass.DeadInsts.insert(&SI);
     deleteIfTriviallyDead(OldOp);
 

Modified: llvm/trunk/lib/Transforms/Utils/FunctionComparator.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Transforms/Utils/FunctionComparator.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/lib/Transforms/Utils/FunctionComparator.cpp (original)
+++ llvm/trunk/lib/Transforms/Utils/FunctionComparator.cpp Tue Jul 11 15:23:00 2017
@@ -513,8 +513,8 @@ int FunctionComparator::cmpOperations(co
     if (int Res =
             cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
       return Res;
-    if (int Res =
-            cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
+    if (int Res = cmpNumbers(LI->getSyncScopeID(),
+                             cast<LoadInst>(R)->getSyncScopeID()))
       return Res;
     return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
         cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));
@@ -529,7 +529,8 @@ int FunctionComparator::cmpOperations(co
     if (int Res =
             cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
+    return cmpNumbers(SI->getSyncScopeID(),
+                      cast<StoreInst>(R)->getSyncScopeID());
   }
   if (const CmpInst *CI = dyn_cast<CmpInst>(L))
     return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());
@@ -584,7 +585,8 @@ int FunctionComparator::cmpOperations(co
     if (int Res =
             cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
+    return cmpNumbers(FI->getSyncScopeID(),
+                      cast<FenceInst>(R)->getSyncScopeID());
   }
   if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(L)) {
     if (int Res = cmpNumbers(CXI->isVolatile(),
@@ -601,8 +603,8 @@ int FunctionComparator::cmpOperations(co
             cmpOrderings(CXI->getFailureOrdering(),
                          cast<AtomicCmpXchgInst>(R)->getFailureOrdering()))
       return Res;
-    return cmpNumbers(CXI->getSynchScope(),
-                      cast<AtomicCmpXchgInst>(R)->getSynchScope());
+    return cmpNumbers(CXI->getSyncScopeID(),
+                      cast<AtomicCmpXchgInst>(R)->getSyncScopeID());
   }
   if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(L)) {
     if (int Res = cmpNumbers(RMWI->getOperation(),
@@ -614,8 +616,8 @@ int FunctionComparator::cmpOperations(co
     if (int Res = cmpOrderings(RMWI->getOrdering(),
                              cast<AtomicRMWInst>(R)->getOrdering()))
       return Res;
-    return cmpNumbers(RMWI->getSynchScope(),
-                      cast<AtomicRMWInst>(R)->getSynchScope());
+    return cmpNumbers(RMWI->getSyncScopeID(),
+                      cast<AtomicRMWInst>(R)->getSyncScopeID());
   }
   if (const PHINode *PNL = dyn_cast<PHINode>(L)) {
     const PHINode *PNR = cast<PHINode>(R);

Modified: llvm/trunk/test/Assembler/atomic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Assembler/atomic.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Assembler/atomic.ll (original)
+++ llvm/trunk/test/Assembler/atomic.ll Tue Jul 11 15:23:00 2017
@@ -5,14 +5,20 @@
 define void @f(i32* %x) {
   ; CHECK: load atomic i32, i32* %x unordered, align 4
   load atomic i32, i32* %x unordered, align 4
-  ; CHECK: load atomic volatile i32, i32* %x singlethread acquire, align 4
-  load atomic volatile i32, i32* %x singlethread acquire, align 4
+  ; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+  load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+  ; CHECK: load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
+  load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
   ; CHECK: store atomic i32 3, i32* %x release, align 4
   store atomic i32 3, i32* %x release, align 4
-  ; CHECK: store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
-  store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
-  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
-  cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
+  ; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+  store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+  store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+  cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+  ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
+  cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
   ; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
   cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
   ; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
@@ -23,9 +29,13 @@ define void @f(i32* %x) {
   atomicrmw add i32* %x, i32 10 seq_cst
   ; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 monotonic
   atomicrmw volatile xchg i32* %x, i32 10 monotonic
-  ; CHECK: fence singlethread release
-  fence singlethread release
+  ; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 syncscope("agent") monotonic
+  atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
+  ; CHECK: fence syncscope("singlethread") release
+  fence syncscope("singlethread") release
   ; CHECK: fence seq_cst
   fence seq_cst
+  ; CHECK: fence syncscope("device") seq_cst
+  fence syncscope("device") seq_cst
   ret void
 }

Added: llvm/trunk/test/Bitcode/atomic-no-syncscope.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/atomic-no-syncscope.ll?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/Bitcode/atomic-no-syncscope.ll (added)
+++ llvm/trunk/test/Bitcode/atomic-no-syncscope.ll Tue Jul 11 15:23:00 2017
@@ -0,0 +1,17 @@
+; RUN: llvm-dis -o - %s.bc | FileCheck %s
+
+; Backwards compatibility test: make sure we can process bitcode without
+; synchronization scope names encoded in it.
+
+; CHECK: load atomic i32, i32* %x unordered, align 4
+; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+; CHECK: store atomic i32 3, i32* %x release, align 4
+; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
+; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
+; CHECK: cmpxchg weak i32* %x, i32 13, i32 0 seq_cst monotonic
+; CHECK: atomicrmw add i32* %x, i32 10 seq_cst
+; CHECK: atomicrmw volatile xchg  i32* %x, i32 10 monotonic
+; CHECK: fence syncscope("singlethread") release
+; CHECK: fence seq_cst

Added: llvm/trunk/test/Bitcode/atomic-no-syncscope.ll.bc
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/atomic-no-syncscope.ll.bc?rev=307722&view=auto
==============================================================================
Binary file - no diff available.

Propchange: llvm/trunk/test/Bitcode/atomic-no-syncscope.ll.bc
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Modified: llvm/trunk/test/Bitcode/atomic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/atomic.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/atomic.ll (original)
+++ llvm/trunk/test/Bitcode/atomic.ll Tue Jul 11 15:23:00 2017
@@ -11,8 +11,8 @@ define void @test_cmpxchg(i32* %addr, i3
   cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
   ; CHECK: cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
 
-  cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
-  ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
+  cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
+  ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
 
   ret void
 }

Modified: llvm/trunk/test/Bitcode/compatibility-3.6.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility-3.6.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility-3.6.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility-3.6.ll Tue Jul 11 15:23:00 2017
@@ -551,8 +551,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -571,33 +571,33 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   ; XXX: The parser spits out the load type here.
   %ld.1 = load atomic i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/compatibility-3.7.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility-3.7.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility-3.7.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility-3.7.ll Tue Jul 11 15:23:00 2017
@@ -596,8 +596,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -616,32 +616,32 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/compatibility-3.8.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility-3.8.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility-3.8.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility-3.8.ll Tue Jul 11 15:23:00 2017
@@ -627,8 +627,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -647,32 +647,32 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/compatibility-3.9.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility-3.9.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility-3.9.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility-3.9.ll Tue Jul 11 15:23:00 2017
@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/compatibility-4.0.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility-4.0.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility-4.0.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility-4.0.ll Tue Jul 11 15:23:00 2017
@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/compatibility.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/compatibility.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/compatibility.ll (original)
+++ llvm/trunk/test/Bitcode/compatibility.ll Tue Jul 11 15:23:00 2017
@@ -705,8 +705,8 @@ define void @atomics(i32* %word) {
   ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
   %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
   ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
-  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
-  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+  %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+  ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
   %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
   %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@@ -725,32 +725,32 @@ define void @atomics(i32* %word) {
   ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
   %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
   ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
-  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
-  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
-  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+  %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+  %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+  ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
   fence acquire
   ; CHECK: fence acquire
   fence release
   ; CHECK: fence release
   fence acq_rel
   ; CHECK: fence acq_rel
-  fence singlethread seq_cst
-  ; CHECK: fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
+  ; CHECK: fence syncscope("singlethread") seq_cst
 
   %ld.1 = load atomic i32, i32* %word monotonic, align 4
   ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
   %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
   ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
-  %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
-  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+  %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+  ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
 
   store atomic i32 23, i32* %word monotonic, align 4
   ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
   store atomic volatile i32 24, i32* %word monotonic, align 4
   ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
-  store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
-  ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+  store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+  ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
   ret void
 }
 

Modified: llvm/trunk/test/Bitcode/memInstructions.3.2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/memInstructions.3.2.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Bitcode/memInstructions.3.2.ll (original)
+++ llvm/trunk/test/Bitcode/memInstructions.3.2.ll Tue Jul 11 15:23:00 2017
@@ -107,29 +107,29 @@ entry:
 ; CHECK-NEXT: %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
   %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
 
-; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
-  %res9 = load atomic i8, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
-  %res10 = load atomic i8, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
-  %res11 = load atomic i8, i8* %ptr1 singlethread acquire, align 1
+; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
+  %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
 
-; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
-  %res12 = load atomic i8, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
-; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
-  %res13 = load atomic volatile i8, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
-  %res14 = load atomic volatile i8, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
-  %res15 = load atomic volatile i8, i8* %ptr1 singlethread acquire, align 1
+; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
+  %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
 
-; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
-  %res16 = load atomic volatile i8, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
   ret void
 }
@@ -193,29 +193,29 @@ entry:
 ; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
   store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
-  store atomic i8 2, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
-  store atomic i8 2, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread release, align 1
-  store atomic i8 2, i8* %ptr1 singlethread release, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
 
-; CHECK-NEXT: store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
-  store atomic i8 2, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread unordered, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread monotonic, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread release, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
 
-; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
-  store atomic volatile i8 2, i8* %ptr1 singlethread seq_cst, align 1
+; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
+  store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
 
   ret void
 }
@@ -232,13 +232,13 @@ entry:
 ; CHECK-NEXT: %res2 = extractvalue { i32, i1 } [[TMP]], 0
   %res2 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new monotonic monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 ; CHECK-NEXT: %res3 = extractvalue { i32, i1 } [[TMP]], 0
-  %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+  %res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 ; CHECK-NEXT: %res4 = extractvalue { i32, i1 } [[TMP]], 0
-  %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread monotonic monotonic
+  %res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acquire acquire
@@ -249,13 +249,13 @@ entry:
 ; CHECK-NEXT: %res6 = extractvalue { i32, i1 } [[TMP]], 0
   %res6 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acquire acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 ; CHECK-NEXT: %res7 = extractvalue { i32, i1 } [[TMP]], 0
-  %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+  %res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 ; CHECK-NEXT: %res8 = extractvalue { i32, i1 } [[TMP]], 0
-  %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acquire acquire
+  %res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new release monotonic
@@ -266,13 +266,13 @@ entry:
 ; CHECK-NEXT: %res10 = extractvalue { i32, i1 } [[TMP]], 0
   %res10 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new release monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 ; CHECK-NEXT: %res11 = extractvalue { i32, i1 } [[TMP]], 0
-  %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+  %res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 ; CHECK-NEXT: %res12 = extractvalue { i32, i1 } [[TMP]], 0
-  %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread release monotonic
+  %res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
@@ -283,13 +283,13 @@ entry:
 ; CHECK-NEXT: %res14 = extractvalue { i32, i1 } [[TMP]], 0
   %res14 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 ; CHECK-NEXT: %res15 = extractvalue { i32, i1 } [[TMP]], 0
-  %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+  %res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 ; CHECK-NEXT: %res16 = extractvalue { i32, i1 } [[TMP]], 0
-  %res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread acq_rel acquire
+  %res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
 
 
 ; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
@@ -300,13 +300,13 @@ entry:
 ; CHECK-NEXT: %res18 = extractvalue { i32, i1 } [[TMP]], 0
   %res18 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 ; CHECK-NEXT: %res19 = extractvalue { i32, i1 } [[TMP]], 0
-  %res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+  %res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 
-; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 ; CHECK-NEXT: %res20 = extractvalue { i32, i1 } [[TMP]], 0
-  %res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new singlethread seq_cst seq_cst
+  %res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
 
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll Tue Jul 11 15:23:00 2017
@@ -1328,16 +1328,16 @@ define void @test_load_store_atomics(i8*
 ; CHECK: G_STORE [[V0]](s8), [[ADDR]](p0) :: (store monotonic 1 into %ir.addr)
 ; CHECK: [[V1:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load acquire 1 from %ir.addr)
 ; CHECK: G_STORE [[V1]](s8), [[ADDR]](p0) :: (store release 1 into %ir.addr)
-; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load singlethread seq_cst 1 from %ir.addr)
-; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store singlethread monotonic 1 into %ir.addr)
+; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load syncscope("singlethread") seq_cst 1 from %ir.addr)
+; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store syncscope("singlethread") monotonic 1 into %ir.addr)
   %v0 = load atomic i8, i8* %addr unordered, align 1
   store atomic i8 %v0, i8* %addr monotonic, align 1
 
   %v1 = load atomic i8, i8* %addr acquire, align 1
   store atomic i8 %v1, i8* %addr release, align 1
 
-  %v2 = load atomic i8, i8* %addr singlethread seq_cst, align 1
-  store atomic i8 %v2, i8* %addr singlethread monotonic, align 1
+  %v2 = load atomic i8, i8* %addr syncscope("singlethread") seq_cst, align 1
+  store atomic i8 %v2, i8* %addr syncscope("singlethread") monotonic, align 1
 
   ret void
 }

Modified: llvm/trunk/test/CodeGen/AArch64/fence-singlethread.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AArch64/fence-singlethread.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/AArch64/fence-singlethread.ll (original)
+++ llvm/trunk/test/CodeGen/AArch64/fence-singlethread.ll Tue Jul 11 15:23:00 2017
@@ -16,6 +16,6 @@ define void @fence_singlethread() {
 ; IOS: ; COMPILER BARRIER
 ; IOS-NOT: dmb
 
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
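
(Illustration only, not part of the patch: a minimal IR sketch of the new
syntax next to the default scope. The function name @fences is made up;
an unscoped fence still defaults to the CrossThread/System scope.)

  define void @fences() {
    fence seq_cst                              ; default: system scope
    fence syncscope("singlethread") seq_cst    ; was: fence singlethread seq_cst
    ret void
  }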

Added: llvm/trunk/test/CodeGen/AMDGPU/syncscopes.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/AMDGPU/syncscopes.ll?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/AMDGPU/syncscopes.ll (added)
+++ llvm/trunk/test/CodeGen/AMDGPU/syncscopes.ll Tue Jul 11 15:23:00 2017
@@ -0,0 +1,19 @@
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -stop-before=si-debugger-insert-nops < %s | FileCheck --check-prefix=GCN %s
+
+; GCN-LABEL: name: syncscopes
+; GCN: FLAT_STORE_DWORD killed %vgpr1_vgpr2, killed %vgpr0, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
+; GCN: FLAT_STORE_DWORD killed %vgpr4_vgpr5, killed %vgpr3, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
+; GCN: FLAT_STORE_DWORD killed %vgpr7_vgpr8, killed %vgpr6, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
+define void @syncscopes(
+    i32 %agent,
+    i32 addrspace(4)* %agent_out,
+    i32 %workgroup,
+    i32 addrspace(4)* %workgroup_out,
+    i32 %wavefront,
+    i32 addrspace(4)* %wavefront_out) {
+entry:
+  store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
+  store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
+  store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
+  ret void
+}
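
(Illustration only, not part of the patch: the same syntax carries over to
the other atomic instructions, with "agent" and "workgroup" being AMDGPU
scope names; the parser accepts any target-defined string. %ptr, %cmp and
%new are placeholders.)

  %pair = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("agent") seq_cst seq_cst
  %old = atomicrmw add i32* %ptr, i32 1 syncscope("workgroup") monotonic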

Modified: llvm/trunk/test/CodeGen/ARM/fence-singlethread.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/ARM/fence-singlethread.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/ARM/fence-singlethread.ll (original)
+++ llvm/trunk/test/CodeGen/ARM/fence-singlethread.ll Tue Jul 11 15:23:00 2017
@@ -11,6 +11,6 @@ define void @fence_singlethread() {
 ; CHECK: @ COMPILER BARRIER
 ; CHECK-NOT: dmb
 
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }

Modified: llvm/trunk/test/CodeGen/MIR/AArch64/atomic-memoperands.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/MIR/AArch64/atomic-memoperands.mir?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/MIR/AArch64/atomic-memoperands.mir (original)
+++ llvm/trunk/test/CodeGen/MIR/AArch64/atomic-memoperands.mir Tue Jul 11 15:23:00 2017
@@ -14,7 +14,7 @@
 # CHECK: %3(s16) = G_LOAD %0(p0) :: (load acquire 2)
 # CHECK: G_STORE %3(s16), %0(p0) :: (store release 2)
 # CHECK: G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
-# CHECK: G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
+# CHECK: G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
 name:            atomic_memoperands
 body: |
   bb.0:
@@ -25,6 +25,6 @@ body: |
     %3:_(s16) = G_LOAD %0(p0) :: (load acquire 2)
     G_STORE %3(s16), %0(p0) :: (store release 2)
     G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
-    G_STORE %1(s64), %0(p0) :: (store singlethread seq_cst 8)
+    G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
     RET_ReallyLR
 ...

Added: llvm/trunk/test/CodeGen/MIR/AMDGPU/syncscopes.mir
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/MIR/AMDGPU/syncscopes.mir?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/CodeGen/MIR/AMDGPU/syncscopes.mir (added)
+++ llvm/trunk/test/CodeGen/MIR/AMDGPU/syncscopes.mir Tue Jul 11 15:23:00 2017
@@ -0,0 +1,98 @@
+# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -run-pass=none %s -o - | FileCheck --check-prefix=GCN %s
+
+--- |
+  ; ModuleID = '<stdin>'
+  source_filename = "<stdin>"
+  target datalayout = "e-p:32:32-p1:64:64-p2:64:64-p3:32:32-p4:64:64-p5:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64"
+  target triple = "amdgcn-amd-amdhsa"
+  
+  define void @syncscopes(i32 %agent, i32 addrspace(4)* %agent_out, i32 %workgroup, i32 addrspace(4)* %workgroup_out, i32 %wavefront, i32 addrspace(4)* %wavefront_out) #0 {
+  entry:
+    store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
+    store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
+    store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
+    ret void
+  }
+  
+  ; Function Attrs: convergent nounwind
+  declare { i1, i64 } @llvm.amdgcn.if(i1) #1
+  
+  ; Function Attrs: convergent nounwind
+  declare { i1, i64 } @llvm.amdgcn.else(i64) #1
+  
+  ; Function Attrs: convergent nounwind readnone
+  declare i64 @llvm.amdgcn.break(i64) #2
+  
+  ; Function Attrs: convergent nounwind readnone
+  declare i64 @llvm.amdgcn.if.break(i1, i64) #2
+  
+  ; Function Attrs: convergent nounwind readnone
+  declare i64 @llvm.amdgcn.else.break(i64, i64) #2
+  
+  ; Function Attrs: convergent nounwind
+  declare i1 @llvm.amdgcn.loop(i64) #1
+  
+  ; Function Attrs: convergent nounwind
+  declare void @llvm.amdgcn.end.cf(i64) #1
+  
+  attributes #0 = { "target-cpu"="gfx803" }
+  attributes #1 = { convergent nounwind }
+  attributes #2 = { convergent nounwind readnone }
+
+# GCN-LABEL: name: syncscopes
+# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
+# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
+# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
+...
+---
+name:            syncscopes
+alignment:       0
+exposesReturnsTwice: false
+legalized:       false
+regBankSelected: false
+selected:        false
+tracksRegLiveness: true
+liveins:         
+  - { reg: '%sgpr4_sgpr5' }
+frameInfo:       
+  isFrameAddressTaken: false
+  isReturnAddressTaken: false
+  hasStackMap:     false
+  hasPatchPoint:   false
+  stackSize:       0
+  offsetAdjustment: 0
+  maxAlignment:    0
+  adjustsStack:    false
+  hasCalls:        false
+  hasOpaqueSPAdjustment: false
+  hasVAStart:      false
+  hasMustTailInVarArgFunc: false
+body:             |
+  bb.0.entry:
+    liveins: %sgpr4_sgpr5
+  
+    S_WAITCNT 0
+    %sgpr0_sgpr1 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 8, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
+    %sgpr6 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 0, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
+    %sgpr2_sgpr3 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 24, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
+    %sgpr7 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 16, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
+    %sgpr8 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 32, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
+    S_WAITCNT 127
+    %vgpr0 = V_MOV_B32_e32 %sgpr0, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr0_sgpr1
+    %sgpr4_sgpr5 = S_LOAD_DWORDX2_IMM killed %sgpr4_sgpr5, 40, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
+    %vgpr1 = V_MOV_B32_e32 killed %sgpr1, implicit %exec, implicit killed %sgpr0_sgpr1, implicit %sgpr0_sgpr1, implicit %exec
+    %vgpr2 = V_MOV_B32_e32 killed %sgpr6, implicit %exec, implicit %exec
+    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
+    S_WAITCNT 112
+    %vgpr0 = V_MOV_B32_e32 %sgpr2, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr2_sgpr3
+    %vgpr1 = V_MOV_B32_e32 killed %sgpr3, implicit %exec, implicit killed %sgpr2_sgpr3, implicit %sgpr2_sgpr3, implicit %exec
+    %vgpr2 = V_MOV_B32_e32 killed %sgpr7, implicit %exec, implicit %exec
+    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
+    S_WAITCNT 112
+    %vgpr0 = V_MOV_B32_e32 %sgpr4, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr4_sgpr5
+    %vgpr1 = V_MOV_B32_e32 killed %sgpr5, implicit %exec, implicit killed %sgpr4_sgpr5, implicit %sgpr4_sgpr5, implicit %exec
+    %vgpr2 = V_MOV_B32_e32 killed %sgpr8, implicit %exec, implicit %exec
+    FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
+    S_ENDPGM
+
+...

Modified: llvm/trunk/test/CodeGen/PowerPC/atomics-regression.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/CodeGen/PowerPC/atomics-regression.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/CodeGen/PowerPC/atomics-regression.ll (original)
+++ llvm/trunk/test/CodeGen/PowerPC/atomics-regression.ll Tue Jul 11 15:23:00 2017
@@ -370,7 +370,7 @@ define void @test36() {
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread acquire
+  fence syncscope("singlethread") acquire
   ret void
 }
 
@@ -379,7 +379,7 @@ define void @test37() {
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread release
+  fence syncscope("singlethread") release
   ret void
 }
 
@@ -388,7 +388,7 @@ define void @test38() {
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  fence singlethread acq_rel
+  fence syncscope("singlethread") acq_rel
   ret void
 }
 
@@ -397,7 +397,7 @@ define void @test39() {
 ; PPC64LE:       # BB#0:
 ; PPC64LE-NEXT:    sync
 ; PPC64LE-NEXT:    blr
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
 
@@ -1273,7 +1273,7 @@ define void @test80(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread monotonic monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1294,7 +1294,7 @@ define void @test81(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acquire monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1315,7 +1315,7 @@ define void @test82(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acquire acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1336,7 +1336,7 @@ define void @test83(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread release monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1357,7 +1357,7 @@ define void @test84(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread release acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1379,7 +1379,7 @@ define void @test85(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acq_rel monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1401,7 +1401,7 @@ define void @test86(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread acq_rel acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1423,7 +1423,7 @@ define void @test87(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst monotonic
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1445,7 +1445,7 @@ define void @test88(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst acquire
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1467,7 +1467,7 @@ define void @test89(i8* %ptr, i8 %cmp, i
 ; PPC64LE-NEXT:    stbcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i8* %ptr, i8 %cmp, i8 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1487,7 +1487,7 @@ define void @test90(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread monotonic monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1508,7 +1508,7 @@ define void @test91(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acquire monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1529,7 +1529,7 @@ define void @test92(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acquire acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1550,7 +1550,7 @@ define void @test93(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread release monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1571,7 +1571,7 @@ define void @test94(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread release acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1593,7 +1593,7 @@ define void @test95(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acq_rel monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1615,7 +1615,7 @@ define void @test96(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread acq_rel acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1637,7 +1637,7 @@ define void @test97(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst monotonic
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1659,7 +1659,7 @@ define void @test98(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst acquire
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1681,7 +1681,7 @@ define void @test99(i16* %ptr, i16 %cmp,
 ; PPC64LE-NEXT:    sthcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i16* %ptr, i16 %cmp, i16 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1701,7 +1701,7 @@ define void @test100(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread monotonic monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1722,7 +1722,7 @@ define void @test101(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acquire monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1743,7 +1743,7 @@ define void @test102(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acquire acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1764,7 +1764,7 @@ define void @test103(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread release monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1785,7 +1785,7 @@ define void @test104(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread release acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -1807,7 +1807,7 @@ define void @test105(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acq_rel monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -1829,7 +1829,7 @@ define void @test106(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread acq_rel acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -1851,7 +1851,7 @@ define void @test107(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst monotonic
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -1873,7 +1873,7 @@ define void @test108(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst acquire
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -1895,7 +1895,7 @@ define void @test109(i32* %ptr, i32 %cmp
 ; PPC64LE-NEXT:    stwcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i32* %ptr, i32 %cmp, i32 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -1915,7 +1915,7 @@ define void @test110(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread monotonic monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") monotonic monotonic
   ret void
 }
 
@@ -1936,7 +1936,7 @@ define void @test111(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acquire monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acquire monotonic
   ret void
 }
 
@@ -1957,7 +1957,7 @@ define void @test112(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acquire acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acquire acquire
   ret void
 }
 
@@ -1978,7 +1978,7 @@ define void @test113(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread release monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") release monotonic
   ret void
 }
 
@@ -1999,7 +1999,7 @@ define void @test114(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:  # BB#3:
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread release acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") release acquire
   ret void
 }
 
@@ -2021,7 +2021,7 @@ define void @test115(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acq_rel monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acq_rel monotonic
   ret void
 }
 
@@ -2043,7 +2043,7 @@ define void @test116(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread acq_rel acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") acq_rel acquire
   ret void
 }
 
@@ -2065,7 +2065,7 @@ define void @test117(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst monotonic
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst monotonic
   ret void
 }
 
@@ -2087,7 +2087,7 @@ define void @test118(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst acquire
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst acquire
   ret void
 }
 
@@ -2109,7 +2109,7 @@ define void @test119(i64* %ptr, i64 %cmp
 ; PPC64LE-NEXT:    stdcx. 6, 0, 3
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val singlethread seq_cst seq_cst
+  %res = cmpxchg i64* %ptr, i64 %cmp, i64 %val syncscope("singlethread") seq_cst seq_cst
   ret void
 }
 
@@ -5847,7 +5847,7 @@ define i8 @test340(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -5862,7 +5862,7 @@ define i8 @test341(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -5877,7 +5877,7 @@ define i8 @test342(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -5893,7 +5893,7 @@ define i8 @test343(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -5909,7 +5909,7 @@ define i8 @test344(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw xchg i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -5923,7 +5923,7 @@ define i16 @test345(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -5938,7 +5938,7 @@ define i16 @test346(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -5953,7 +5953,7 @@ define i16 @test347(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -5969,7 +5969,7 @@ define i16 @test348(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -5985,7 +5985,7 @@ define i16 @test349(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw xchg i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -5999,7 +5999,7 @@ define i32 @test350(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6014,7 +6014,7 @@ define i32 @test351(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6029,7 +6029,7 @@ define i32 @test352(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6045,7 +6045,7 @@ define i32 @test353(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6061,7 +6061,7 @@ define i32 @test354(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw xchg i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6075,7 +6075,7 @@ define i64 @test355(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6090,7 +6090,7 @@ define i64 @test356(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6105,7 +6105,7 @@ define i64 @test357(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6121,7 +6121,7 @@ define i64 @test358(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6137,7 +6137,7 @@ define i64 @test359(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xchg i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw xchg i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6152,7 +6152,7 @@ define i8 @test360(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6168,7 +6168,7 @@ define i8 @test361(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6184,7 +6184,7 @@ define i8 @test362(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6201,7 +6201,7 @@ define i8 @test363(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6218,7 +6218,7 @@ define i8 @test364(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw add i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6233,7 +6233,7 @@ define i16 @test365(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6249,7 +6249,7 @@ define i16 @test366(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6265,7 +6265,7 @@ define i16 @test367(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6282,7 +6282,7 @@ define i16 @test368(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6299,7 +6299,7 @@ define i16 @test369(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw add i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6314,7 +6314,7 @@ define i32 @test370(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6330,7 +6330,7 @@ define i32 @test371(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6346,7 +6346,7 @@ define i32 @test372(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6363,7 +6363,7 @@ define i32 @test373(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6380,7 +6380,7 @@ define i32 @test374(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw add i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6395,7 +6395,7 @@ define i64 @test375(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6411,7 +6411,7 @@ define i64 @test376(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6427,7 +6427,7 @@ define i64 @test377(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6444,7 +6444,7 @@ define i64 @test378(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6461,7 +6461,7 @@ define i64 @test379(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw add i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw add i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6476,7 +6476,7 @@ define i8 @test380(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6492,7 +6492,7 @@ define i8 @test381(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6508,7 +6508,7 @@ define i8 @test382(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6525,7 +6525,7 @@ define i8 @test383(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6542,7 +6542,7 @@ define i8 @test384(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw sub i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6557,7 +6557,7 @@ define i16 @test385(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6573,7 +6573,7 @@ define i16 @test386(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6589,7 +6589,7 @@ define i16 @test387(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6606,7 +6606,7 @@ define i16 @test388(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6623,7 +6623,7 @@ define i16 @test389(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw sub i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6638,7 +6638,7 @@ define i32 @test390(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6654,7 +6654,7 @@ define i32 @test391(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6670,7 +6670,7 @@ define i32 @test392(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -6687,7 +6687,7 @@ define i32 @test393(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -6704,7 +6704,7 @@ define i32 @test394(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw sub i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -6719,7 +6719,7 @@ define i64 @test395(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -6735,7 +6735,7 @@ define i64 @test396(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -6751,7 +6751,7 @@ define i64 @test397(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -6768,7 +6768,7 @@ define i64 @test398(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -6785,7 +6785,7 @@ define i64 @test399(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw sub i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw sub i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -6800,7 +6800,7 @@ define i8 @test400(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -6816,7 +6816,7 @@ define i8 @test401(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -6832,7 +6832,7 @@ define i8 @test402(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -6849,7 +6849,7 @@ define i8 @test403(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -6866,7 +6866,7 @@ define i8 @test404(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw and i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -6881,7 +6881,7 @@ define i16 @test405(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -6897,7 +6897,7 @@ define i16 @test406(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -6913,7 +6913,7 @@ define i16 @test407(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -6930,7 +6930,7 @@ define i16 @test408(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -6947,7 +6947,7 @@ define i16 @test409(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw and i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -6962,7 +6962,7 @@ define i32 @test410(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -6978,7 +6978,7 @@ define i32 @test411(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -6994,7 +6994,7 @@ define i32 @test412(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7011,7 +7011,7 @@ define i32 @test413(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7028,7 +7028,7 @@ define i32 @test414(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw and i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7043,7 +7043,7 @@ define i64 @test415(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7059,7 +7059,7 @@ define i64 @test416(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7075,7 +7075,7 @@ define i64 @test417(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7092,7 +7092,7 @@ define i64 @test418(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7109,7 +7109,7 @@ define i64 @test419(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw and i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw and i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7124,7 +7124,7 @@ define i8 @test420(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7140,7 +7140,7 @@ define i8 @test421(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7156,7 +7156,7 @@ define i8 @test422(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7173,7 +7173,7 @@ define i8 @test423(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7190,7 +7190,7 @@ define i8 @test424(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw nand i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7205,7 +7205,7 @@ define i16 @test425(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7221,7 +7221,7 @@ define i16 @test426(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7237,7 +7237,7 @@ define i16 @test427(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7254,7 +7254,7 @@ define i16 @test428(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7271,7 +7271,7 @@ define i16 @test429(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw nand i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7286,7 +7286,7 @@ define i32 @test430(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7302,7 +7302,7 @@ define i32 @test431(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7318,7 +7318,7 @@ define i32 @test432(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7335,7 +7335,7 @@ define i32 @test433(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7352,7 +7352,7 @@ define i32 @test434(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw nand i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7367,7 +7367,7 @@ define i64 @test435(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7383,7 +7383,7 @@ define i64 @test436(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7399,7 +7399,7 @@ define i64 @test437(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7416,7 +7416,7 @@ define i64 @test438(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7433,7 +7433,7 @@ define i64 @test439(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw nand i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw nand i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7448,7 +7448,7 @@ define i8 @test440(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7464,7 +7464,7 @@ define i8 @test441(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7480,7 +7480,7 @@ define i8 @test442(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7497,7 +7497,7 @@ define i8 @test443(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7514,7 +7514,7 @@ define i8 @test444(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw or i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7529,7 +7529,7 @@ define i16 @test445(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7545,7 +7545,7 @@ define i16 @test446(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7561,7 +7561,7 @@ define i16 @test447(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7578,7 +7578,7 @@ define i16 @test448(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7595,7 +7595,7 @@ define i16 @test449(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw or i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7610,7 +7610,7 @@ define i32 @test450(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7626,7 +7626,7 @@ define i32 @test451(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7642,7 +7642,7 @@ define i32 @test452(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7659,7 +7659,7 @@ define i32 @test453(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -7676,7 +7676,7 @@ define i32 @test454(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw or i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -7691,7 +7691,7 @@ define i64 @test455(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -7707,7 +7707,7 @@ define i64 @test456(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -7723,7 +7723,7 @@ define i64 @test457(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -7740,7 +7740,7 @@ define i64 @test458(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -7757,7 +7757,7 @@ define i64 @test459(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw or i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw or i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -7772,7 +7772,7 @@ define i8 @test460(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -7788,7 +7788,7 @@ define i8 @test461(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -7804,7 +7804,7 @@ define i8 @test462(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -7821,7 +7821,7 @@ define i8 @test463(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -7838,7 +7838,7 @@ define i8 @test464(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw xor i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -7853,7 +7853,7 @@ define i16 @test465(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -7869,7 +7869,7 @@ define i16 @test466(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -7885,7 +7885,7 @@ define i16 @test467(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -7902,7 +7902,7 @@ define i16 @test468(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -7919,7 +7919,7 @@ define i16 @test469(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw xor i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -7934,7 +7934,7 @@ define i32 @test470(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -7950,7 +7950,7 @@ define i32 @test471(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -7966,7 +7966,7 @@ define i32 @test472(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -7983,7 +7983,7 @@ define i32 @test473(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -8000,7 +8000,7 @@ define i32 @test474(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw xor i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -8015,7 +8015,7 @@ define i64 @test475(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8031,7 +8031,7 @@ define i64 @test476(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8047,7 +8047,7 @@ define i64 @test477(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  # BB#2:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8064,7 +8064,7 @@ define i64 @test478(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8081,7 +8081,7 @@ define i64 @test479(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw xor i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw xor i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8099,7 +8099,7 @@ define i8 @test480(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB480_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8118,7 +8118,7 @@ define i8 @test481(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB481_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8137,7 +8137,7 @@ define i8 @test482(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB482_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8157,7 +8157,7 @@ define i8 @test483(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8177,7 +8177,7 @@ define i8 @test484(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw max i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8195,7 +8195,7 @@ define i16 @test485(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB485_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8214,7 +8214,7 @@ define i16 @test486(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB486_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8233,7 +8233,7 @@ define i16 @test487(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB487_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8253,7 +8253,7 @@ define i16 @test488(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -8273,7 +8273,7 @@ define i16 @test489(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw max i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -8290,7 +8290,7 @@ define i32 @test490(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB490_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -8308,7 +8308,7 @@ define i32 @test491(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB491_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -8326,7 +8326,7 @@ define i32 @test492(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB492_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -8345,7 +8345,7 @@ define i32 @test493(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -8364,7 +8364,7 @@ define i32 @test494(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw max i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -8381,7 +8381,7 @@ define i64 @test495(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB495_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8399,7 +8399,7 @@ define i64 @test496(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB496_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8417,7 +8417,7 @@ define i64 @test497(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB497_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8436,7 +8436,7 @@ define i64 @test498(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8455,7 +8455,7 @@ define i64 @test499(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw max i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw max i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8473,7 +8473,7 @@ define i8 @test500(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB500_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8492,7 +8492,7 @@ define i8 @test501(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB501_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8511,7 +8511,7 @@ define i8 @test502(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB502_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8531,7 +8531,7 @@ define i8 @test503(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8551,7 +8551,7 @@ define i8 @test504(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw min i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8569,7 +8569,7 @@ define i16 @test505(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB505_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8588,7 +8588,7 @@ define i16 @test506(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB506_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8607,7 +8607,7 @@ define i16 @test507(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB507_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8627,7 +8627,7 @@ define i16 @test508(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -8647,7 +8647,7 @@ define i16 @test509(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw min i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -8664,7 +8664,7 @@ define i32 @test510(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB510_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -8682,7 +8682,7 @@ define i32 @test511(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB511_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -8700,7 +8700,7 @@ define i32 @test512(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB512_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -8719,7 +8719,7 @@ define i32 @test513(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -8738,7 +8738,7 @@ define i32 @test514(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw min i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -8755,7 +8755,7 @@ define i64 @test515(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB515_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -8773,7 +8773,7 @@ define i64 @test516(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB516_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -8791,7 +8791,7 @@ define i64 @test517(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB517_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -8810,7 +8810,7 @@ define i64 @test518(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -8829,7 +8829,7 @@ define i64 @test519(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw min i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw min i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -8846,7 +8846,7 @@ define i8 @test520(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB520_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -8864,7 +8864,7 @@ define i8 @test521(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB521_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -8882,7 +8882,7 @@ define i8 @test522(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB522_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -8901,7 +8901,7 @@ define i8 @test523(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -8920,7 +8920,7 @@ define i8 @test524(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw umax i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -8937,7 +8937,7 @@ define i16 @test525(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB525_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -8955,7 +8955,7 @@ define i16 @test526(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB526_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -8973,7 +8973,7 @@ define i16 @test527(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB527_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -8992,7 +8992,7 @@ define i16 @test528(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -9011,7 +9011,7 @@ define i16 @test529(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw umax i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -9028,7 +9028,7 @@ define i32 @test530(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB530_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -9046,7 +9046,7 @@ define i32 @test531(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB531_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -9064,7 +9064,7 @@ define i32 @test532(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB532_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -9083,7 +9083,7 @@ define i32 @test533(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -9102,7 +9102,7 @@ define i32 @test534(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw umax i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -9119,7 +9119,7 @@ define i64 @test535(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB535_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -9137,7 +9137,7 @@ define i64 @test536(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB536_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -9155,7 +9155,7 @@ define i64 @test537(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB537_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -9174,7 +9174,7 @@ define i64 @test538(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -9193,7 +9193,7 @@ define i64 @test539(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umax i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw umax i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 
@@ -9210,7 +9210,7 @@ define i8 @test540(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB540_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread monotonic
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") monotonic
   ret i8 %ret
 }
 
@@ -9228,7 +9228,7 @@ define i8 @test541(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB541_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread acquire
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") acquire
   ret i8 %ret
 }
 
@@ -9246,7 +9246,7 @@ define i8 @test542(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:  .LBB542_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread release
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") release
   ret i8 %ret
 }
 
@@ -9265,7 +9265,7 @@ define i8 @test543(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread acq_rel
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") acq_rel
   ret i8 %ret
 }
 
@@ -9284,7 +9284,7 @@ define i8 @test544(i8* %ptr, i8 %val) {
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i8* %ptr, i8 %val singlethread seq_cst
+  %ret = atomicrmw umin i8* %ptr, i8 %val syncscope("singlethread") seq_cst
   ret i8 %ret
 }
 
@@ -9301,7 +9301,7 @@ define i16 @test545(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB545_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread monotonic
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") monotonic
   ret i16 %ret
 }
 
@@ -9319,7 +9319,7 @@ define i16 @test546(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB546_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread acquire
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") acquire
   ret i16 %ret
 }
 
@@ -9337,7 +9337,7 @@ define i16 @test547(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:  .LBB547_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread release
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") release
   ret i16 %ret
 }
 
@@ -9356,7 +9356,7 @@ define i16 @test548(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread acq_rel
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") acq_rel
   ret i16 %ret
 }
 
@@ -9375,7 +9375,7 @@ define i16 @test549(i16* %ptr, i16 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i16* %ptr, i16 %val singlethread seq_cst
+  %ret = atomicrmw umin i16* %ptr, i16 %val syncscope("singlethread") seq_cst
   ret i16 %ret
 }
 
@@ -9392,7 +9392,7 @@ define i32 @test550(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB550_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread monotonic
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") monotonic
   ret i32 %ret
 }
 
@@ -9410,7 +9410,7 @@ define i32 @test551(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB551_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread acquire
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") acquire
   ret i32 %ret
 }
 
@@ -9428,7 +9428,7 @@ define i32 @test552(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:  .LBB552_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread release
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") release
   ret i32 %ret
 }
 
@@ -9447,7 +9447,7 @@ define i32 @test553(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread acq_rel
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") acq_rel
   ret i32 %ret
 }
 
@@ -9466,7 +9466,7 @@ define i32 @test554(i32* %ptr, i32 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i32* %ptr, i32 %val singlethread seq_cst
+  %ret = atomicrmw umin i32* %ptr, i32 %val syncscope("singlethread") seq_cst
   ret i32 %ret
 }
 
@@ -9483,7 +9483,7 @@ define i64 @test555(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB555_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread monotonic
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") monotonic
   ret i64 %ret
 }
 
@@ -9501,7 +9501,7 @@ define i64 @test556(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB556_3:
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread acquire
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") acquire
   ret i64 %ret
 }
 
@@ -9519,7 +9519,7 @@ define i64 @test557(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:  .LBB557_3:
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread release
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") release
   ret i64 %ret
 }
 
@@ -9538,7 +9538,7 @@ define i64 @test558(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread acq_rel
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") acq_rel
   ret i64 %ret
 }
 
@@ -9557,7 +9557,7 @@ define i64 @test559(i64* %ptr, i64 %val)
 ; PPC64LE-NEXT:    mr 3, 5
 ; PPC64LE-NEXT:    lwsync
 ; PPC64LE-NEXT:    blr
-  %ret = atomicrmw umin i64* %ptr, i64 %val singlethread seq_cst
+  %ret = atomicrmw umin i64* %ptr, i64 %val syncscope("singlethread") seq_cst
   ret i64 %ret
 }
 

Modified: llvm/trunk/test/Instrumentation/ThreadSanitizer/atomic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Instrumentation/ThreadSanitizer/atomic.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Instrumentation/ThreadSanitizer/atomic.ll (original)
+++ llvm/trunk/test/Instrumentation/ThreadSanitizer/atomic.ll Tue Jul 11 15:23:00 2017
@@ -1959,7 +1959,7 @@ entry:
 
 define void @atomic_signal_fence_acquire() nounwind uwtable {
 entry:
-  fence singlethread acquire, !dbg !7
+  fence syncscope("singlethread") acquire, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_acquire
@@ -1975,7 +1975,7 @@ entry:
 
 define void @atomic_signal_fence_release() nounwind uwtable {
 entry:
-  fence singlethread release, !dbg !7
+  fence syncscope("singlethread") release, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_release
@@ -1991,7 +1991,7 @@ entry:
 
 define void @atomic_signal_fence_acq_rel() nounwind uwtable {
 entry:
-  fence singlethread acq_rel, !dbg !7
+  fence syncscope("singlethread") acq_rel, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_acq_rel
@@ -2007,7 +2007,7 @@ entry:
 
 define void @atomic_signal_fence_seq_cst() nounwind uwtable {
 entry:
-  fence singlethread seq_cst, !dbg !7
+  fence syncscope("singlethread") seq_cst, !dbg !7
   ret void, !dbg !7
 }
 ; CHECK-LABEL: atomic_signal_fence_seq_cst

Added: llvm/trunk/test/Linker/Inputs/syncscope-1.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Linker/Inputs/syncscope-1.ll?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/Linker/Inputs/syncscope-1.ll (added)
+++ llvm/trunk/test/Linker/Inputs/syncscope-1.ll Tue Jul 11 15:23:00 2017
@@ -0,0 +1,6 @@
+define void @syncscope_1() {
+  fence syncscope("agent") seq_cst
+  fence syncscope("workgroup") seq_cst
+  fence syncscope("wavefront") seq_cst
+  ret void
+}

Added: llvm/trunk/test/Linker/Inputs/syncscope-2.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Linker/Inputs/syncscope-2.ll?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/Linker/Inputs/syncscope-2.ll (added)
+++ llvm/trunk/test/Linker/Inputs/syncscope-2.ll Tue Jul 11 15:23:00 2017
@@ -0,0 +1,6 @@
+define void @syncscope_2() {
+  fence syncscope("image") seq_cst
+  fence syncscope("agent") seq_cst
+  fence syncscope("workgroup") seq_cst
+  ret void
+}

Added: llvm/trunk/test/Linker/syncscopes.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Linker/syncscopes.ll?rev=307722&view=auto
==============================================================================
--- llvm/trunk/test/Linker/syncscopes.ll (added)
+++ llvm/trunk/test/Linker/syncscopes.ll Tue Jul 11 15:23:00 2017
@@ -0,0 +1,11 @@
+; RUN: llvm-link %S/Inputs/syncscope-1.ll %S/Inputs/syncscope-2.ll -S | FileCheck %s
+
+; CHECK-LABEL: define void @syncscope_1
+; CHECK: fence syncscope("agent") seq_cst
+; CHECK: fence syncscope("workgroup") seq_cst
+; CHECK: fence syncscope("wavefront") seq_cst
+
+; CHECK-LABEL: define void @syncscope_2
+; CHECK: fence syncscope("image") seq_cst
+; CHECK: fence syncscope("agent") seq_cst
+; CHECK: fence syncscope("workgroup") seq_cst

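The linker tests above check that target-specific scope names such as
"agent", "workgroup", and "wavefront" round-trip through llvm-link. As a
minimal sketch of how such fences can be created with the name-based C++ API
(assuming the post-patch LLVMContext::getOrInsertSyncScopeID and the
SyncScope::ID parameter on IRBuilder::CreateFence; the function and symbol
names below are illustrative only, not part of this patch):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;

  static void emitAgentFence(Module &M) {
    LLVMContext &Ctx = M.getContext();
    // Resolve the target-specific scope name to a scope ID in this context.
    SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");

    FunctionType *FTy = FunctionType::get(Type::getVoidTy(Ctx), false);
    Function *F = Function::Create(FTy, Function::ExternalLinkage,
                                   "syncscope_example", &M);
    IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));

    // Prints back as: fence syncscope("agent") seq_cst
    B.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);
    B.CreateRetVoid();
  }
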
Modified: llvm/trunk/test/Transforms/GVN/PRE/atomic.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/GVN/PRE/atomic.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Transforms/GVN/PRE/atomic.ll (original)
+++ llvm/trunk/test/Transforms/GVN/PRE/atomic.ll Tue Jul 11 15:23:00 2017
@@ -208,14 +208,14 @@ define void @fence_seq_cst(i32* %P1, i32
   ret void
 }
 
-; Can't DSE across a full singlethread fence
+; Can't DSE across a full syncscope("singlethread") fence
 define void @fence_seq_cst_st(i32* %P1, i32* %P2) {
 ; CHECK-LABEL: @fence_seq_cst_st(
 ; CHECK: store
-; CHECK: fence singlethread seq_cst
+; CHECK: fence syncscope("singlethread") seq_cst
 ; CHECK: store
   store i32 0, i32* %P1, align 4
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   store i32 0, i32* %P1, align 4
   ret void
 }

Modified: llvm/trunk/test/Transforms/InstCombine/consecutive-fences.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/InstCombine/consecutive-fences.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Transforms/InstCombine/consecutive-fences.ll (original)
+++ llvm/trunk/test/Transforms/InstCombine/consecutive-fences.ll Tue Jul 11 15:23:00 2017
@@ -4,7 +4,7 @@
 
 ; CHECK-LABEL: define void @tinkywinky
 ; CHECK-NEXT:   fence seq_cst
-; CHECK-NEXT:   fence singlethread acquire
+; CHECK-NEXT:   fence syncscope("singlethread") acquire
 ; CHECK-NEXT:   ret void
 ; CHECK-NEXT: }
 
@@ -12,21 +12,21 @@ define void @tinkywinky() {
   fence seq_cst
   fence seq_cst
   fence seq_cst
-  fence singlethread acquire
-  fence singlethread acquire
-  fence singlethread acquire
+  fence syncscope("singlethread") acquire
+  fence syncscope("singlethread") acquire
+  fence syncscope("singlethread") acquire
   ret void
 }
 
 ; CHECK-LABEL: define void @dipsy
 ; CHECK-NEXT:   fence seq_cst
-; CHECK-NEXT:   fence singlethread seq_cst
+; CHECK-NEXT:   fence syncscope("singlethread") seq_cst
 ; CHECK-NEXT:   ret void
 ; CHECK-NEXT: }
 
 define void @dipsy() {
   fence seq_cst
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   ret void
 }
 

Modified: llvm/trunk/test/Transforms/Sink/fence.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/Sink/fence.ll?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/test/Transforms/Sink/fence.ll (original)
+++ llvm/trunk/test/Transforms/Sink/fence.ll Tue Jul 11 15:23:00 2017
@@ -5,9 +5,9 @@ target triple = "x86_64-unknown-linux-gn
 define void @test1(i32* ()*) {
 entry:
   %1 = call i32* %0() #0
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   %2 = load i32, i32* %1, align 4
-  fence singlethread seq_cst
+  fence syncscope("singlethread") seq_cst
   %3 = icmp eq i32 %2, 0
   br i1 %3, label %fail, label %pass
 
@@ -20,9 +20,9 @@ pass:
 
 ; CHECK-LABEL: @test1(
 ; CHECK:  %[[call:.*]] = call i32* %0()
-; CHECK:  fence singlethread seq_cst
+; CHECK:  fence syncscope("singlethread") seq_cst
 ; CHECK:  load i32, i32* %[[call]], align 4
-; CHECK:  fence singlethread seq_cst
+; CHECK:  fence syncscope("singlethread") seq_cst
 
 
 attributes #0 = { nounwind readnone }

Modified: llvm/trunk/unittests/Analysis/AliasAnalysisTest.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/unittests/Analysis/AliasAnalysisTest.cpp?rev=307722&r1=307721&r2=307722&view=diff
==============================================================================
--- llvm/trunk/unittests/Analysis/AliasAnalysisTest.cpp (original)
+++ llvm/trunk/unittests/Analysis/AliasAnalysisTest.cpp Tue Jul 11 15:23:00 2017
@@ -180,10 +180,11 @@ TEST_F(AliasAnalysisTest, getModRefInfo)
   auto *VAArg1 = new VAArgInst(Addr, PtrType, "vaarg", BB);
   auto *CmpXChg1 = new AtomicCmpXchgInst(
       Addr, ConstantInt::get(IntType, 0), ConstantInt::get(IntType, 1),
-      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic, CrossThread, BB);
+      AtomicOrdering::Monotonic, AtomicOrdering::Monotonic,
+      SyncScope::System, BB);
   auto *AtomicRMW =
       new AtomicRMWInst(AtomicRMWInst::Xchg, Addr, ConstantInt::get(IntType, 1),
-                        AtomicOrdering::Monotonic, CrossThread, BB);
+                        AtomicOrdering::Monotonic, SyncScope::System, BB);
 
   ReturnInst::Create(C, nullptr, BB);
 

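The unittest hunk above also shows the C++ API change in this patch: the
AtomicCmpXchgInst and AtomicRMWInst constructors now take a SyncScope::ID
(here SyncScope::System) where they previously took the SynchronizationScope
enum value CrossThread. A small sketch (assumed usage, not taken from the
patch) of reading the scope back off such an instruction:

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"

  using namespace llvm;

  static bool isSingleThread(const AtomicRMWInst &RMW) {
    // getSyncScopeID returns the instruction's SyncScope::ID, which can be
    // compared directly against a known scope ID.
    return RMW.getSyncScopeID() == SyncScope::SingleThread;
  }
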