[clang] Add clang atomic control options and attribute (PR #114841)
Yaxun Liu via cfe-commits
cfe-commits at lists.llvm.org
Wed Feb 5 08:09:58 PST 2025
https://github.com/yxsamliu updated https://github.com/llvm/llvm-project/pull/114841
>From 85b6a4cc6d7eb329a28b864ad1d1e10007053500 Mon Sep 17 00:00:00 2001
From: "Yaxun (Sam) Liu" <yaxun.liu at amd.com>
Date: Wed, 17 Jul 2024 09:39:28 -0400
Subject: [PATCH] Add clang atomic control options and attribute
Add a compiler option and a statement attribute for controlling the emission of
target-specific metadata on atomicrmw instructions in IR.
The RFC for this attribute and option is
https://discourse.llvm.org/t/rfc-add-clang-atomic-control-options-and-pragmas/80641.
Originally a pragma was proposed; it was later changed to a Clang attribute.
The attribute accepts one, two, or all three options and must be applied to a
compound statement. Attributed compound statements can be nested, with inner
attributes overriding the options specified by outer attributes or the target's
default options. These options then determine the target-specific metadata added
to atomic instructions in the IR.
In addition to the attribute, a new compiler option is introduced:
-fatomic=no_remote_memory:{on|off},no_fine_grained_memory:{on|off},ignore_denormal_mode:{on|off}.
This option allows users to override the target's default options through the
Clang driver and frontend.
In terms of implementation, the atomic attribute is represented in the AST by the
existing AttributedStmt, with minimal changes to AST and Sema.
During code generation in Clang, CodeGenModule maintains the current atomic
options, which are used to emit the relevant metadata for atomic instructions.
An RAII object saves and restores the atomic options when entering and exiting
a nested AttributedStmt.
---
clang/docs/LanguageExtensions.rst | 145 ++++++
clang/docs/ReleaseNotes.rst | 6 +
clang/include/clang/Basic/Attr.td | 13 +
clang/include/clang/Basic/AttrDocs.td | 15 +
.../clang/Basic/DiagnosticDriverKinds.td | 10 +
.../clang/Basic/DiagnosticSemaKinds.td | 2 +
clang/include/clang/Basic/Features.def | 2 +
clang/include/clang/Basic/LangOptions.h | 154 ++++++
clang/include/clang/Basic/TargetInfo.h | 6 +
clang/include/clang/Driver/Options.td | 8 +
clang/include/clang/Parse/Parser.h | 5 +
clang/lib/Basic/LangOptions.cpp | 19 +
clang/lib/Basic/Targets/AMDGPU.cpp | 8 +
clang/lib/CodeGen/CGStmt.cpp | 5 +
clang/lib/CodeGen/CodeGenFunction.h | 17 +
clang/lib/CodeGen/CodeGenModule.cpp | 3 +-
clang/lib/CodeGen/CodeGenModule.h | 8 +
clang/lib/CodeGen/Targets/AMDGPU.cpp | 20 +-
clang/lib/Driver/ToolChains/Clang.cpp | 62 +++
clang/lib/Parse/ParseDecl.cpp | 65 +++
clang/lib/Sema/SemaStmtAttr.cpp | 41 ++
clang/test/AST/ast-dump-atomic-options.hip | 102 ++++
.../test/CodeGen/AMDGPU/amdgpu-atomic-float.c | 247 ++++------
clang/test/CodeGenCUDA/amdgpu-atomic-ops.cu | 195 ++++----
clang/test/CodeGenCUDA/atomic-ops.cu | 200 ++++----
clang/test/CodeGenCUDA/atomic-options.hip | 456 ++++++++++++++++++
clang/test/CodeGenOpenCL/atomic-ops.cl | 20 +-
clang/test/Driver/atomic-options.hip | 31 ++
.../test/OpenMP/amdgpu-unsafe-fp-atomics.cpp | 10 +-
clang/test/Parser/Inputs/cuda.h | 54 +++
clang/test/Parser/atomic-options.hip | 34 ++
31 files changed, 1572 insertions(+), 391 deletions(-)
create mode 100644 clang/test/AST/ast-dump-atomic-options.hip
create mode 100644 clang/test/CodeGenCUDA/atomic-options.hip
create mode 100644 clang/test/Driver/atomic-options.hip
create mode 100644 clang/test/Parser/Inputs/cuda.h
create mode 100644 clang/test/Parser/atomic-options.hip
diff --git a/clang/docs/LanguageExtensions.rst b/clang/docs/LanguageExtensions.rst
index dd7513ceaa4e4b..a14e4a46cd9cd4 100644
--- a/clang/docs/LanguageExtensions.rst
+++ b/clang/docs/LanguageExtensions.rst
@@ -5404,6 +5404,151 @@ third argument, can only occur at file scope.
a = b[i] * c[i] + e;
}
+Extensions for controlling atomic code generation
+=================================================
+
+The ``[[clang::atomic]]`` statement attribute enables users to control how
+atomic operations are lowered in LLVM IR by conveying additional metadata to
+the backend. The primary goal is to allow users to specify certain options,
+like ignoring floating-point denormal modes, or restricting which memory
+regions can be used, without affecting the correctness of code that does not
+rely on these behaviors.
+
+In LLVM, lowering of atomic operations (e.g., ``atomicrmw``) can differ based on
+the target's capabilities. Some backends support native atomic instructions
+only for certain operation types or alignments, or only in specific memory
+regions. Likewise, floating-point atomic instructions may or may not respect
+IEEE denormal requirements. When the user is unconcerned about denormal-mode
+compliance (for performance reasons) or knows that certain atomic operations
+will not function in a particular memory space, extra hints are needed to
+tell the backend how to proceed.
+
+A classic example is an architecture where floating-point atomic add does not
+fully conform to IEEE denormal-mode handling. If the user does not mind ignoring
+that aspect, they would prefer to still emit a faster hardware atomic
+instruction, rather than a fallback or CAS loop. Conversely, on certain GPUs
+(e.g., AMDGPU), memory accessed via PCIe may only support a subset of atomic
+operations (e.g., integer add, exchange, or compare-and-swap). To ensure correct
+and efficient lowering, the compiler must know whether the user wants to prevent
+the use of these instructions.
+
+Because this is metadata for atomic instructions and can be dropped if the
+backend does not support it, it does not affect correctness (the program's
+behavior remains correct if the metadata is ignored), but it can significantly
+improve performance or guide code generation in the cases that matter.
+
+The attribute may be applied only to a compound statement and looks like:
+
+.. code-block:: c++
+
+ [[clang::atomic(no_remote_memory, !no_fine_grained_memory, ignore_denormal_mode)]]
+ {
+ // Atomic instructions in this block carry extra metadata reflecting
+ // these user-specified options.
+ }
+
+You can provide one or more of these options, each optionally prefixed with
+``!`` to negate that option. The currently supported options are:
+
+``no_remote_memory``
+  Indicates that atomic operations in this block can assume the accessed memory
+  is not remote memory (relevant for certain GPU memory spaces, like those
+  accessed via PCIe). The ``!`` prefix disables this option if an enclosing
+  scope or the command line enabled it.
+
+``no_fine_grained_memory``
+  Indicates that atomic operations in this block do not access fine-grained
+  memory regions. On some GPUs, this guarantee lets the backend emit certain
+  atomic instructions directly instead of falling back to CAS loops.
+
+``ignore_denormal_mode``
+ Allows the backend to ignore floating-point denormals for atomic instructions
+ (like 32-bit ``fadd``). This often improves performance on architectures that
+ do not handle denormals efficiently.
+
+Any unspecified option is inherited from the global defaults, which can be set
+by a compiler flag (described below) or the target's built-in defaults.
+
+.. code-block:: c++
+
+ // Suppose we globally set: -fatomic=no_remote_memory:on,no_fine_grained_memory:off
+ // (meaning we do not allow remote memory usage but do allow fine-grained memory)
+
+ void example() {
+ // Locally override the "no_remote_memory" setting to *off* for this block,
+ // but turn on "no_fine_grained_memory".
+ [[clang::atomic(!no_remote_memory, no_fine_grained_memory)]] {
+ // This compound statement's atomic ops behave differently than outside:
+ // - no_remote_memory is disabled here
+ // - no_fine_grained_memory is enabled
+ // - ignore_denormal_mode remains unchanged from global default
+ // ...
+ }
+ }
+
+A new compiler option, ``-fatomic=<key>:<value>[,<key2>:<value2>...]``, can
+globally override the target's defaults for these atomic-lowering options.
+For instance:
+
+.. code-block:: console
+
+ $ clang -fatomic=no_remote_memory:on,ignore_denormal_mode:on file.cpp
+
+Each key can be one of:
+``no_remote_memory``, ``no_fine_grained_memory``, or ``ignore_denormal_mode``.
+Each value can be ``on``, ``off``, ``yes``, ``no``, ``true``, ``false``, ``1``,
+or ``0`` (case-insensitive). Unrecognized options or invalid formats produce an
+error. Code using the ``[[clang::atomic]]`` attribute then selectively overrides
+the command-line defaults on a per-block basis.
+
+An attribute written after a function declarator appertains to the function
+type rather than to a statement, so this will not work:
+
+.. code-block:: c++
+
+ void func() [[clang::atomic(no_remote_memory)]] { // Wrong: applies to function type
+ }
+
+Use the attribute on a compound statement within the function:
+
+.. code-block:: c++
+
+ void func() {
+ [[clang::atomic(no_remote_memory)]] { // Correct usage
+ // Atomic operations in this block
+ }
+ }
+
+The ``[[clang::atomic]]`` attribute affects only the code generation of atomic
+instructions within the annotated compound statement. Clang attaches
+target-specific metadata to those atomic instructions in the emitted LLVM IR
+to guide backend lowering. This metadata is fixed at the clang code generation
+phase and is not modified by later LLVM passes (such as function inlining).
+
+For example, consider:
+
+.. code-block:: c++
+
+ inline void func() {
+ [[clang::atomic(no_remote_memory)]] {
+ // Atomic instructions lowered with metadata.
+ }
+ }
+
+ void foo() {
+ [[clang::atomic(!no_remote_memory)]] {
+ func(); // Inlined by LLVM, but the metadata from 'func()' remains unchanged.
+ }
+ }
+
+Similarly, when used within an ``operator<=>`` definition, the attribute only
+affects the atomic instructions inside the annotated compound statement. It
+does not alter the semantics of the generated comparison or equality
+operators.
+
+Although current usage focuses on AMDGPU, the mechanism is general. Other
+backends can ignore or implement their own responses to these flags if desired.
+If a target does not understand or enforce these hints, the IR remains valid,
+and the resulting program is still correct (although potentially less optimized
+for that user's needs).
+
Specifying an attribute for multiple declarations (#pragma clang attribute)
===========================================================================
diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index a220e57d0b3222..499407f9fe70cb 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -110,6 +110,12 @@ Removed Compiler Flags
Attribute Changes in Clang
--------------------------
+- Introduced a new statement attribute ``[[clang::atomic]]`` that enables
+ fine-grained control over atomic code generation on a per-statement basis.
+ Supported options include ``no_remote_memory``, ``no_fine_grained_memory``,
+ and ``ignore_denormal_mode``, particularly relevant for AMDGPU targets,
+ where they map to corresponding IR metadata.
+
Improvements to Clang's diagnostics
-----------------------------------
diff --git a/clang/include/clang/Basic/Attr.td b/clang/include/clang/Basic/Attr.td
index f4ba2bc3c6de31..9fcfed7e4d1f9e 100644
--- a/clang/include/clang/Basic/Attr.td
+++ b/clang/include/clang/Basic/Attr.td
@@ -4972,3 +4972,16 @@ def NoTrivialAutoVarInit: InheritableAttr {
let Documentation = [NoTrivialAutoVarInitDocs];
let SimpleHandler = 1;
}
+
+def Atomic : StmtAttr {
+ let Spellings = [Clang<"atomic">];
+ let Args = [VariadicStringArgument<"Options">];
+ let Subjects = SubjectList<[CompoundStmt], ErrorDiag, "compound statements">;
+ let HasCustomParsing = 1;
+ let Documentation = [AtomicDocs];
+ let AdditionalMembers = [{
+ AtomicOptionsOverride AOO;
+ AtomicOptionsOverride getAtomicOptionsOverride() const { return AOO; }
+ void setAtomicOptionsOverride(AtomicOptionsOverride A) { AOO = A; }
+ }];
+}
diff --git a/clang/include/clang/Basic/AttrDocs.td b/clang/include/clang/Basic/AttrDocs.td
index a8b588169725a2..c90af8530a306e 100644
--- a/clang/include/clang/Basic/AttrDocs.td
+++ b/clang/include/clang/Basic/AttrDocs.td
@@ -8068,6 +8068,21 @@ for details.
}];
}
+def AtomicDocs : Documentation {
+ let Category = DocCatStmt;
+ let Content = [{
+The ``atomic`` attribute can be applied to *compound statements* to override or
+further specify the default atomic code-generation behavior, especially on
+targets such as AMDGPU. You can annotate compound statements with options
+to modify how atomic instructions inside that statement are emitted at the IR
+level.
+
+For details, see the documentation for `@atomic
+<http://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-controlling-atomic-code-generation>`_
+
+ }];
+}
+
def ClangRandomizeLayoutDocs : Documentation {
let Category = DocCatDecl;
let Heading = "randomize_layout, no_randomize_layout";
diff --git a/clang/include/clang/Basic/DiagnosticDriverKinds.td b/clang/include/clang/Basic/DiagnosticDriverKinds.td
index 8d599c96eb4fbf..d1d284de2323f3 100644
--- a/clang/include/clang/Basic/DiagnosticDriverKinds.td
+++ b/clang/include/clang/Basic/DiagnosticDriverKinds.td
@@ -305,6 +305,16 @@ def err_drv_invalid_int_value : Error<"invalid integral value '%1' in '%0'">;
def err_drv_invalid_value_with_suggestion : Error<
"invalid value '%1' in '%0', expected one of: %2">;
def err_drv_alignment_not_power_of_two : Error<"alignment is not a power of 2 in '%0'">;
+
+def err_drv_invalid_atomic_option : Error<
+ "invalid argument '%0' to '-fatomic='; %select{"
+ "must be a comma-separated list of key:value pairs|"
+ "key '%2' is not allowed; allowed keys are 'no_fine_grained_memory', "
+ "'no_remote_memory', 'ignore_denormal_mode'|"
+ "value '%3' is invalid; must be boolean (true/false, 1/0, yes/no, on/off)|"
+ "duplicate key '%2'"
+ "}1">;
+
def err_drv_invalid_remap_file : Error<
"invalid option '%0' not of the form <from-file>;<to-file>">;
def err_drv_invalid_gcc_install_dir : Error<"'%0' does not contain a GCC installation">;
diff --git a/clang/include/clang/Basic/DiagnosticSemaKinds.td b/clang/include/clang/Basic/DiagnosticSemaKinds.td
index 00a94eb7a30367..4ecf1837c94dd6 100644
--- a/clang/include/clang/Basic/DiagnosticSemaKinds.td
+++ b/clang/include/clang/Basic/DiagnosticSemaKinds.td
@@ -3294,6 +3294,8 @@ def err_invalid_branch_protection_spec : Error<
"invalid or misplaced branch protection specification '%0'">;
def warn_unsupported_branch_protection_spec : Warning<
"unsupported branch protection specification '%0'">, InGroup<BranchProtection>;
+def err_attribute_invalid_atomic_argument : Error<
+ "invalid argument '%0' to atomic attribute; valid options are: 'no_remote_memory', 'no_fine_grained_memory', 'ignore_denormal_mode' (optionally prefixed with '!')">;
def warn_unsupported_target_attribute
: Warning<"%select{unsupported|duplicate|unknown}0%select{| CPU|"
diff --git a/clang/include/clang/Basic/Features.def b/clang/include/clang/Basic/Features.def
index c82b6d9b5f6c10..e4b01994b9c2d2 100644
--- a/clang/include/clang/Basic/Features.def
+++ b/clang/include/clang/Basic/Features.def
@@ -312,6 +312,8 @@ EXTENSION(datasizeof, LangOpts.CPlusPlus)
FEATURE(cxx_abi_relative_vtable, LangOpts.CPlusPlus && LangOpts.RelativeCXXABIVTables)
+FEATURE(atomic_attributes, true)
+
// CUDA/HIP Features
FEATURE(cuda_noinline_keyword, LangOpts.CUDA)
EXTENSION(cuda_implicit_host_device_templates, LangOpts.CUDA && LangOpts.OffloadImplicitHostDeviceTemplates)
diff --git a/clang/include/clang/Basic/LangOptions.h b/clang/include/clang/Basic/LangOptions.h
index 16c35bcf49339c..253fe3f410a9b3 100644
--- a/clang/include/clang/Basic/LangOptions.h
+++ b/clang/include/clang/Basic/LangOptions.h
@@ -625,6 +625,10 @@ class LangOptions : public LangOptionsBase {
// WebAssembly target.
bool NoWasmOpt = false;
+ /// The default atomic code-generation options, specified on the command
+ /// line in the form key:{on|off}.
+ std::vector<std::string> AtomicOptionsAsWritten;
+
LangOptions();
/// Set language defaults for the given input language and
@@ -1096,6 +1100,156 @@ inline void FPOptions::applyChanges(FPOptionsOverride FPO) {
*this = FPO.applyOverrides(*this);
}
+enum class AtomicOptionKind {
+ NoRemoteMemory,
+ NoFineGrainedMemory,
+ IgnoreDenormalMode,
+ LANGOPT_ATOMIC_OPTION_LAST
+};
+
+struct AtomicOptions {
+ unsigned no_remote_memory : 1;
+ unsigned no_fine_grained_memory : 1;
+ unsigned ignore_denormal_mode : 1;
+
+ AtomicOptions()
+ : no_remote_memory(0), no_fine_grained_memory(0),
+ ignore_denormal_mode(0) {}
+
+ bool getOption(AtomicOptionKind Kind) const {
+ switch (Kind) {
+ case AtomicOptionKind::NoRemoteMemory:
+ return no_remote_memory;
+ case AtomicOptionKind::NoFineGrainedMemory:
+ return no_fine_grained_memory;
+ case AtomicOptionKind::IgnoreDenormalMode:
+ return ignore_denormal_mode;
+ default:
+ llvm_unreachable("Invalid AtomicOptionKind");
+ }
+ }
+
+ void setOption(AtomicOptionKind Kind, bool Value) {
+ switch (Kind) {
+ case AtomicOptionKind::NoRemoteMemory:
+ no_remote_memory = Value;
+ break;
+ case AtomicOptionKind::NoFineGrainedMemory:
+ no_fine_grained_memory = Value;
+ break;
+ case AtomicOptionKind::IgnoreDenormalMode:
+ ignore_denormal_mode = Value;
+ break;
+ default:
+ llvm_unreachable("Invalid AtomicOptionKind");
+ }
+ }
+
+ LLVM_DUMP_METHOD void dump() const {
+ llvm::errs() << "\n no_remote_memory: " << no_remote_memory
+ << "\n no_fine_grained_memory: " << no_fine_grained_memory
+ << "\n ignore_denormal_mode: " << ignore_denormal_mode << "\n";
+ }
+
+ static constexpr const char *OptionNames[] = {
+ "no_remote_memory", "no_fine_grained_memory", "ignore_denormal_mode"};
+};
+
+/// Represents differences (overrides) between two AtomicOptions values.
+class AtomicOptionsOverride {
+ AtomicOptions Options;
+ // Bitmask for active overrides.
+ unsigned OverrideMask;
+
+ static constexpr unsigned getMask(AtomicOptionKind Kind) {
+ return 1u << static_cast<unsigned>(Kind);
+ }
+
+public:
+ template <typename Func> static void forEachOption(Func &&func) {
+ for (unsigned I = 0; I < static_cast<unsigned>(
+ AtomicOptionKind::LANGOPT_ATOMIC_OPTION_LAST);
+ ++I)
+ func(static_cast<AtomicOptionKind>(I));
+ }
+
+ static constexpr unsigned DefaultMask =
+ (1u << static_cast<unsigned>(
+ AtomicOptionKind::LANGOPT_ATOMIC_OPTION_LAST)) -
+ 1;
+
+ AtomicOptionsOverride() : Options(), OverrideMask(0) {}
+ AtomicOptionsOverride(const LangOptions &LO);
+ AtomicOptionsOverride(AtomicOptions AO)
+ : Options(AO), OverrideMask(DefaultMask) {}
+ AtomicOptionsOverride(AtomicOptions AO, unsigned Mask)
+ : Options(AO), OverrideMask(Mask) {}
+
+ static std::optional<AtomicOptionKind> parseAtomicOverrideKey(StringRef Key) {
+ std::optional<AtomicOptionKind> Result;
+ forEachOption([&](AtomicOptionKind Kind) {
+ if (Key == AtomicOptions::OptionNames[static_cast<unsigned>(Kind)])
+ Result = Kind;
+ });
+ return Result;
+ }
+
+ AtomicOptions applyOverrides(const AtomicOptions &Base) const {
+ AtomicOptions Result = Base;
+ forEachOption([this, &Result](AtomicOptionKind Kind) {
+ if (hasOverride(Kind))
+ Result.setOption(Kind, Options.getOption(Kind));
+ });
+ return Result;
+ }
+
+ bool operator==(const AtomicOptionsOverride &Other) const {
+ bool Equal = true;
+ forEachOption([this, &Other, &Equal](AtomicOptionKind Kind) {
+ if (Options.getOption(Kind) != Other.Options.getOption(Kind))
+ Equal = false;
+ });
+ return Equal && (OverrideMask == Other.OverrideMask);
+ }
+ bool operator!=(const AtomicOptionsOverride &Other) const {
+ return !(*this == Other);
+ }
+
+ bool hasOverride(AtomicOptionKind Kind) const {
+ return (OverrideMask & getMask(Kind)) != 0;
+ }
+
+ bool getOverride(AtomicOptionKind Kind) const {
+ assert(hasOverride(Kind));
+ return Options.getOption(Kind);
+ }
+
+ void setOverride(AtomicOptionKind Kind, bool value) {
+ Options.setOption(Kind, value);
+ OverrideMask |= getMask(Kind);
+ }
+
+ void clearOverride(AtomicOptionKind Kind) {
+ Options.setOption(Kind, false);
+ OverrideMask &= ~getMask(Kind);
+ }
+
+ LLVM_DUMP_METHOD void dump() const {
+ llvm::errs() << "\n AtomicOptionsOverride dump:";
+ forEachOption([this](AtomicOptionKind Kind) {
+ if (hasOverride(Kind))
+ llvm::errs() << "\n "
+ << AtomicOptions::OptionNames[static_cast<unsigned>(Kind)]
+ << " override: " << Options.getOption(Kind);
+ });
+ llvm::errs() << "\n";
+ }
+
+ void setAtomicOverride(AtomicOptionKind Kind, bool Value) {
+ setOverride(Kind, Value);
+ }
+};
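
The value-plus-mask override scheme implemented by ``AtomicOptionsOverride`` above can be sketched in plain C++ (a simplified stand-in for illustration, not the patch's actual types):

```cpp
// Sketch of the override scheme: a value for each option plus a bitmask
// recording which options are explicitly overridden; apply() copies only
// the masked fields onto a base set of options.
enum Kind { NoRemoteMemory = 0, NoFineGrainedMemory = 1, IgnoreDenormalMode = 2, Last = 3 };

struct Options {
  bool Values[Last] = {false, false, false};
};

struct Override {
  Options Opts;
  unsigned Mask = 0; // bit i set => option i is overridden

  void set(Kind K, bool V) {
    Opts.Values[K] = V;
    Mask |= 1u << K;
  }

  Options apply(Options Base) const {
    for (unsigned I = 0; I < Last; ++I)
      if (Mask & (1u << I))
        Base.Values[I] = Opts.Values[I];
    return Base;
  }
};
```

This mirrors how an inner attributed statement overrides only the options it names, while unnamed options inherit from the enclosing scope or target defaults.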
+
/// Describes the kind of translation unit being processed.
enum TranslationUnitKind {
/// The translation unit is a complete translation unit.
diff --git a/clang/include/clang/Basic/TargetInfo.h b/clang/include/clang/Basic/TargetInfo.h
index d762144478b489..05be4c6c7c420e 100644
--- a/clang/include/clang/Basic/TargetInfo.h
+++ b/clang/include/clang/Basic/TargetInfo.h
@@ -295,6 +295,9 @@ class TargetInfo : public TransferrableTargetInfo,
// in function attributes in IR.
llvm::StringSet<> ReadOnlyFeatures;
+ // Default atomic options
+ AtomicOptions AtomicOpts;
+
public:
/// Construct a target for the given options.
///
@@ -1688,6 +1691,9 @@ class TargetInfo : public TransferrableTargetInfo,
return CC_C;
}
+ /// Get the default atomic options.
+ AtomicOptions getAtomicOpts() const { return AtomicOpts; }
+
enum CallingConvCheckResult {
CCCR_OK,
CCCR_Warning,
diff --git a/clang/include/clang/Driver/Options.td b/clang/include/clang/Driver/Options.td
index d8123cc39fdc95..056691d85b0534 100644
--- a/clang/include/clang/Driver/Options.td
+++ b/clang/include/clang/Driver/Options.td
@@ -2260,6 +2260,14 @@ def fsymbol_partition_EQ : Joined<["-"], "fsymbol-partition=">, Group<f_Group>,
Visibility<[ClangOption, CC1Option]>,
MarshallingInfoString<CodeGenOpts<"SymbolPartition">>;
+def fatomic_EQ : CommaJoined<["-"], "fatomic=">, Group<f_Group>,
+ Visibility<[ClangOption, CC1Option]>,
+ HelpText<"Specify codegen options for atomic operations as a comma-separated "
+ "list of key:value pairs. Allowed keys are no_fine_grained_memory, "
+ "no_remote_memory, and ignore_denormal_mode. Each key accepts "
+ "values on/off/yes/no/1/0/true/false">,
+ MarshallingInfoStringVector<LangOpts<"AtomicOptionsAsWritten">>;
+
defm memory_profile : OptInCC1FFlag<"memory-profile", "Enable", "Disable", " heap memory profiling">;
def fmemory_profile_EQ : Joined<["-"], "fmemory-profile=">,
Group<f_Group>, Visibility<[ClangOption, CC1Option]>,
diff --git a/clang/include/clang/Parse/Parser.h b/clang/include/clang/Parse/Parser.h
index e99d2cf2eaa409..61cb50378a980a 100644
--- a/clang/include/clang/Parse/Parser.h
+++ b/clang/include/clang/Parse/Parser.h
@@ -3101,6 +3101,11 @@ class Parser : public CodeCompletionHandler {
std::optional<AvailabilitySpec> ParseAvailabilitySpec();
ExprResult ParseAvailabilityCheckExpr(SourceLocation StartLoc);
+ void ParseAtomicAttribute(IdentifierInfo &AttrName,
+ SourceLocation AttrNameLoc, ParsedAttributes &Attrs,
+ SourceLocation *EndLoc, IdentifierInfo *ScopeName,
+ SourceLocation ScopeLoc, ParsedAttr::Form Form);
+
void ParseExternalSourceSymbolAttribute(IdentifierInfo &ExternalSourceSymbol,
SourceLocation Loc,
ParsedAttributes &Attrs,
diff --git a/clang/lib/Basic/LangOptions.cpp b/clang/lib/Basic/LangOptions.cpp
index e3037ec819add2..0f3bfa9794b837 100644
--- a/clang/lib/Basic/LangOptions.cpp
+++ b/clang/lib/Basic/LangOptions.cpp
@@ -240,3 +240,22 @@ LLVM_DUMP_METHOD void FPOptionsOverride::dump() {
#include "clang/Basic/FPOptions.def"
llvm::errs() << "\n";
}
+
+static bool isTruthy(StringRef Value) {
+ return Value.equals_insensitive("on") || Value.equals_insensitive("true") ||
+ Value.equals_insensitive("1") || Value.equals_insensitive("yes");
+}
+
+AtomicOptionsOverride::AtomicOptionsOverride(const LangOptions &LO)
+ : Options(), OverrideMask(0) {
+ for (const auto &Setting : LO.AtomicOptionsAsWritten) {
+ SmallVector<StringRef, 2> KeyValue;
+ StringRef(Setting).split(KeyValue, ":");
+ // Assuming option string has been checked elsewhere and is valid.
+ assert(KeyValue.size() == 2 && "Invalid atomic option format");
+ if (auto Kind = parseAtomicOverrideKey(KeyValue[0])) {
+ bool IsEnabled = isTruthy(KeyValue[1]);
+ setAtomicOverride(*Kind, IsEnabled);
+ }
+ }
+}
diff --git a/clang/lib/Basic/Targets/AMDGPU.cpp b/clang/lib/Basic/Targets/AMDGPU.cpp
index 0d308cb6af9699..36d8340d476f11 100644
--- a/clang/lib/Basic/Targets/AMDGPU.cpp
+++ b/clang/lib/Basic/Targets/AMDGPU.cpp
@@ -242,6 +242,12 @@ AMDGPUTargetInfo::AMDGPUTargetInfo(const llvm::Triple &Triple,
WavefrontSize = (GPUFeatures & llvm::AMDGPU::FEATURE_WAVE32) ? 32 : 64;
AllowAMDGPUUnsafeFPAtomics = Opts.AllowAMDGPUUnsafeFPAtomics;
+ // Set the default atomic options
+ AtomicOpts.setOption(clang::AtomicOptionKind::NoRemoteMemory, true);
+ AtomicOpts.setOption(clang::AtomicOptionKind::NoFineGrainedMemory, true);
+ AtomicOpts.setOption(clang::AtomicOptionKind::IgnoreDenormalMode,
+ Opts.AllowAMDGPUUnsafeFPAtomics);
+
// Set pointer width and alignment for the generic address space.
PointerWidth = PointerAlign = getPointerWidthV(LangAS::Default);
if (getMaxPointerWidth() == 64) {
@@ -265,6 +271,8 @@ void AMDGPUTargetInfo::adjust(DiagnosticsEngine &Diags, LangOptions &Opts) {
// to OpenCL can be removed from the following line.
setAddressSpaceMap((Opts.OpenCL && !Opts.OpenCLGenericAddressSpace) ||
!isAMDGCN(getTriple()));
+
+ AtomicOpts = AtomicOptionsOverride(Opts).applyOverrides(AtomicOpts);
}
ArrayRef<Builtin::Info> AMDGPUTargetInfo::getTargetBuiltins() const {
diff --git a/clang/lib/CodeGen/CGStmt.cpp b/clang/lib/CodeGen/CGStmt.cpp
index 7c944fe85a352d..23f49859f84ce9 100644
--- a/clang/lib/CodeGen/CGStmt.cpp
+++ b/clang/lib/CodeGen/CGStmt.cpp
@@ -781,6 +781,7 @@ void CodeGenFunction::EmitAttributedStmt(const AttributedStmt &S) {
HLSLControlFlowHintAttr::Spelling flattenOrBranch =
HLSLControlFlowHintAttr::SpellingNotCalculated;
const CallExpr *musttail = nullptr;
+ AtomicOptionsOverride AOO;
for (const auto *A : S.getAttrs()) {
switch (A->getKind()) {
@@ -811,6 +812,9 @@ void CodeGenFunction::EmitAttributedStmt(const AttributedStmt &S) {
Builder.CreateAssumption(AssumptionVal);
}
} break;
+ case attr::Atomic: {
+ AOO = cast<AtomicAttr>(A)->getAtomicOptionsOverride();
+ } break;
case attr::HLSLControlFlowHint: {
flattenOrBranch = cast<HLSLControlFlowHintAttr>(A)->getSemanticSpelling();
} break;
@@ -822,6 +826,7 @@ void CodeGenFunction::EmitAttributedStmt(const AttributedStmt &S) {
SaveAndRestore save_noconvergent(InNoConvergentAttributedStmt, noconvergent);
SaveAndRestore save_musttail(MustTailCall, musttail);
SaveAndRestore save_flattenOrBranch(HLSLControlFlowAttr, flattenOrBranch);
+ CGAtomicOptionsRAII AORAII(CGM, AOO);
EmitStmt(S.getSubStmt(), S.getAttrs());
}
diff --git a/clang/lib/CodeGen/CodeGenFunction.h b/clang/lib/CodeGen/CodeGenFunction.h
index 670dfce27f6a26..93db2910158a67 100644
--- a/clang/lib/CodeGen/CodeGenFunction.h
+++ b/clang/lib/CodeGen/CodeGenFunction.h
@@ -829,6 +829,23 @@ class CodeGenFunction : public CodeGenTypeCache {
};
FPOptions CurFPFeatures;
+ class CGAtomicOptionsRAII {
+ public:
+ CGAtomicOptionsRAII(CodeGenModule &CGM_, AtomicOptions AO)
+ : CGM(CGM_), SavedAtomicOpts(CGM.getAtomicOpts()) {
+ CGM.setAtomicOpts(AO);
+ }
+ CGAtomicOptionsRAII(CodeGenModule &CGM_, AtomicOptionsOverride AOO)
+ : CGM(CGM_), SavedAtomicOpts(CGM.getAtomicOpts()) {
+ CGM.setAtomicOpts(AOO.applyOverrides(SavedAtomicOpts));
+ }
+ ~CGAtomicOptionsRAII() { CGM.setAtomicOpts(SavedAtomicOpts); }
+
+ private:
+ CodeGenModule &CGM;
+ AtomicOptions SavedAtomicOpts;
+ };
+
public:
/// ObjCEHValueStack - Stack of Objective-C exception values, used for
/// rethrows.
diff --git a/clang/lib/CodeGen/CodeGenModule.cpp b/clang/lib/CodeGen/CodeGenModule.cpp
index 05879cd486a8c9..2892e4a2defc3e 100644
--- a/clang/lib/CodeGen/CodeGenModule.cpp
+++ b/clang/lib/CodeGen/CodeGenModule.cpp
@@ -342,7 +342,8 @@ CodeGenModule::CodeGenModule(ASTContext &C,
PreprocessorOpts(PPO), CodeGenOpts(CGO), TheModule(M), Diags(diags),
Target(C.getTargetInfo()), ABI(createCXXABI(*this)),
VMContext(M.getContext()), VTables(*this), StackHandler(diags),
- SanitizerMD(new SanitizerMetadata(*this)) {
+ SanitizerMD(new SanitizerMetadata(*this)),
+ AtomicOpts(Target.getAtomicOpts()) {
// Initialize the type cache.
Types.reset(new CodeGenTypes(*this));
diff --git a/clang/lib/CodeGen/CodeGenModule.h b/clang/lib/CodeGen/CodeGenModule.h
index 0956296e2d5d86..c8dab554abb364 100644
--- a/clang/lib/CodeGen/CodeGenModule.h
+++ b/clang/lib/CodeGen/CodeGenModule.h
@@ -676,6 +676,8 @@ class CodeGenModule : public CodeGenTypeCache {
std::optional<PointerAuthQualifier>
computeVTPointerAuthentication(const CXXRecordDecl *ThisClass);
+ AtomicOptions AtomicOpts;
+
public:
CodeGenModule(ASTContext &C, IntrusiveRefCntPtr<llvm::vfs::FileSystem> FS,
const HeaderSearchOptions &headersearchopts,
@@ -691,6 +693,12 @@ class CodeGenModule : public CodeGenTypeCache {
/// Finalize LLVM code generation.
void Release();
+ /// Get the current Atomic options.
+ AtomicOptions getAtomicOpts() { return AtomicOpts; }
+
+ /// Set the current Atomic options.
+ void setAtomicOpts(AtomicOptions AO) { AtomicOpts = AO; }
+
/// Return true if we should emit location information for expressions.
bool getExpressionLocationsEnabled() const;
diff --git a/clang/lib/CodeGen/Targets/AMDGPU.cpp b/clang/lib/CodeGen/Targets/AMDGPU.cpp
index 788eac5f28231e..977e60b323b155 100644
--- a/clang/lib/CodeGen/Targets/AMDGPU.cpp
+++ b/clang/lib/CodeGen/Targets/AMDGPU.cpp
@@ -584,19 +584,19 @@ void AMDGPUTargetCodeGenInfo::setTargetAtomicMetadata(
AtomicInst.setMetadata(llvm::LLVMContext::MD_noalias_addrspace, ASRange);
}
- if (!RMW || !CGF.getTarget().allowAMDGPUUnsafeFPAtomics())
+ if (!RMW)
return;
- // TODO: Introduce new, more controlled options that also work for integers,
- // and deprecate allowAMDGPUUnsafeFPAtomics.
- llvm::AtomicRMWInst::BinOp RMWOp = RMW->getOperation();
- if (llvm::AtomicRMWInst::isFPOperation(RMWOp)) {
- llvm::MDNode *Empty = llvm::MDNode::get(CGF.getLLVMContext(), {});
+ AtomicOptions AO = CGF.CGM.getAtomicOpts();
+ llvm::MDNode *Empty = llvm::MDNode::get(CGF.getLLVMContext(), {});
+ if (AO.getOption(clang::AtomicOptionKind::NoFineGrainedMemory))
RMW->setMetadata("amdgpu.no.fine.grained.memory", Empty);
-
- if (RMWOp == llvm::AtomicRMWInst::FAdd && RMW->getType()->isFloatTy())
- RMW->setMetadata("amdgpu.ignore.denormal.mode", Empty);
- }
+ if (AO.getOption(clang::AtomicOptionKind::NoRemoteMemory))
+ RMW->setMetadata("amdgpu.no.remote.memory", Empty);
+ if (AO.getOption(clang::AtomicOptionKind::IgnoreDenormalMode) &&
+ RMW->getOperation() == llvm::AtomicRMWInst::FAdd &&
+ RMW->getType()->isFloatTy())
+ RMW->setMetadata("amdgpu.ignore.denormal.mode", Empty);
}
bool AMDGPUTargetCodeGenInfo::shouldEmitStaticExternCAliases() const {
diff --git a/clang/lib/Driver/ToolChains/Clang.cpp b/clang/lib/Driver/ToolChains/Clang.cpp
index 9b5132c5625faa..3311f7ffe3e4f9 100644
--- a/clang/lib/Driver/ToolChains/Clang.cpp
+++ b/clang/lib/Driver/ToolChains/Clang.cpp
@@ -72,6 +72,16 @@ using namespace clang::driver::tools;
using namespace clang;
using namespace llvm::opt;
+static bool isTruthy(StringRef Value) {
+ return Value.equals_insensitive("on") || Value.equals_insensitive("true") ||
+ Value.equals_insensitive("1") || Value.equals_insensitive("yes");
+}
+
+static bool isFalsy(StringRef Value) {
+ return Value.equals_insensitive("off") || Value.equals_insensitive("false") ||
+ Value.equals_insensitive("0") || Value.equals_insensitive("no");
+}
+
static void CheckPreprocessingOptions(const Driver &D, const ArgList &Args) {
if (Arg *A = Args.getLastArg(clang::driver::options::OPT_C, options::OPT_CC,
options::OPT_fminimize_whitespace,
@@ -4964,6 +4974,56 @@ static void ProcessVSRuntimeLibrary(const ToolChain &TC, const ArgList &Args,
CmdArgs.push_back("--dependent-lib=softintrin");
}
+// Helper function to render the -fatomic= options.
+static void renderAtomicOptions(const ArgList &Args, const Driver &D,
+ SmallVectorImpl<const char *> &CmdArgs) {
+ if (Arg *AtomicArg = Args.getLastArg(options::OPT_fatomic_EQ)) {
+ if (!AtomicArg->getNumValues()) {
+ D.Diag(clang::diag::warn_drv_empty_joined_argument)
+ << AtomicArg->getAsString(Args);
+ return;
+ }
+
+ bool Valid = true;
+ std::set<StringRef> Keys;
+ for (StringRef Option : AtomicArg->getValues()) {
+ SmallVector<StringRef, 2> KeyValue;
+ Option.split(KeyValue, ":");
+ // Check for valid "key:value" format.
+ if (KeyValue.size() != 2) {
+ D.Diag(diag::err_drv_invalid_atomic_option) << Option << 0 << ""
+ << "";
+ Valid = false;
+ break;
+ }
+ StringRef Key = KeyValue[0];
+ StringRef Value = KeyValue[1];
+ // Validate key.
+ if (Key != "no_fine_grained_memory" && Key != "no_remote_memory" &&
+ Key != "ignore_denormal_mode") {
+ D.Diag(diag::err_drv_invalid_atomic_option) << Option << 1 << Key << "";
+ Valid = false;
+ break;
+ }
+ // Validate value.
+ if (!isTruthy(Value) && !isFalsy(Value)) {
+ D.Diag(diag::err_drv_invalid_atomic_option)
+ << Option << 2 << "" << Value;
+ Valid = false;
+ break;
+ }
+ // Check for duplicate keys.
+ if (!Keys.insert(Key).second) {
+ D.Diag(diag::err_drv_invalid_atomic_option) << Option << 3 << Key << "";
+ Valid = false;
+ break;
+ }
+ }
+ if (Valid)
+ CmdArgs.push_back(Args.MakeArgString(AtomicArg->getAsString(Args)));
+ }
+}
+
void Clang::ConstructJob(Compilation &C, const JobAction &JA,
const InputInfo &Output, const InputInfoList &Inputs,
const ArgList &Args, const char *LinkingOutput) const {
@@ -5958,6 +6018,8 @@ void Clang::ConstructJob(Compilation &C, const JobAction &JA,
RenderFloatingPointOptions(TC, D, OFastEnabled, Args, CmdArgs, JA);
+ renderAtomicOptions(Args, D, CmdArgs);
+
if (Arg *A = Args.getLastArg(options::OPT_fextend_args_EQ)) {
const llvm::Triple::ArchType Arch = TC.getArch();
if (Arch == llvm::Triple::x86 || Arch == llvm::Triple::x86_64) {
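The validation performed by `renderAtomicOptions` can be sketched without the driver machinery. In this sketch, `std::string` replaces `StringRef`, a `bool` return replaces the diagnostics, and `validateAtomicOptions` is an illustrative name:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <set>
#include <string>
#include <vector>

static std::string lower(std::string S) {
  std::transform(S.begin(), S.end(), S.begin(),
                 [](unsigned char C) { return std::tolower(C); });
  return S;
}

// Each value must be "key:value" with a known key, an on/off-style value,
// and no duplicate keys; returns true only when every option is valid.
bool validateAtomicOptions(const std::vector<std::string> &Values) {
  static const std::set<std::string> Keys = {
      "no_fine_grained_memory", "no_remote_memory", "ignore_denormal_mode"};
  static const std::set<std::string> Truthy = {"on", "true", "1", "yes"};
  static const std::set<std::string> Falsy = {"off", "false", "0", "no"};
  std::set<std::string> Seen;
  for (const std::string &Opt : Values) {
    auto Colon = Opt.find(':');
    if (Colon == std::string::npos)
      return false;                            // not "key:value"
    std::string Key = Opt.substr(0, Colon);
    std::string Val = lower(Opt.substr(Colon + 1));
    if (!Keys.count(Key))
      return false;                            // unknown key
    if (!Truthy.count(Val) && !Falsy.count(Val))
      return false;                            // unknown value
    if (!Seen.insert(Key).second)
      return false;                            // duplicate key
  }
  return true;
}
```

For example, `-fatomic=no_remote_memory:on,ignore_denormal_mode:off` passes, while repeating `no_remote_memory` or misspelling a key is rejected with a diagnostic in the real driver.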
diff --git a/clang/lib/Parse/ParseDecl.cpp b/clang/lib/Parse/ParseDecl.cpp
index 75b5e11f8327ca..c91d8400cbec00 100644
--- a/clang/lib/Parse/ParseDecl.cpp
+++ b/clang/lib/Parse/ParseDecl.cpp
@@ -764,6 +764,67 @@ void Parser::ParseGNUAttributeArgs(
ScopeLoc, Form);
}
+void Parser::ParseAtomicAttribute(
+ IdentifierInfo &AttrName, SourceLocation AttrNameLoc,
+ ParsedAttributes &Attrs, SourceLocation *EndLoc, IdentifierInfo *ScopeName,
+ SourceLocation ScopeLoc, ParsedAttr::Form Form) {
+ BalancedDelimiterTracker T(*this, tok::l_paren);
+ if (T.expectAndConsume())
+ return;
+
+ // Lambda to parse a single option, handling an optional '!' prefix.
+ auto parseOption = [&]() -> IdentifierLoc * {
+ bool isNegated = false;
+ if (Tok.is(tok::exclaim)) {
+ isNegated = true;
+ ConsumeToken();
+ }
+ if (!Tok.is(tok::identifier)) {
+ Diag(Tok.getLocation(), diag::err_expected) << tok::identifier;
+ return nullptr;
+ }
+ IdentifierLoc *IL = ParseIdentifierLoc();
+ if (isNegated) {
+ std::string negName = "!" + IL->Ident->getName().str();
+ IL->Ident = &Actions.getPreprocessor().getIdentifierTable().get(negName);
+ }
+ return IL;
+ };
+
+ SmallVector<IdentifierLoc *, 4> Identifiers;
+
+ // Parse the first option.
+ if (IdentifierLoc *Opt = parseOption()) {
+ Identifiers.push_back(Opt);
+ } else {
+ SkipUntil(tok::r_paren, StopAtSemi);
+ return;
+ }
+
+ // Parse any additional options.
+ while (TryConsumeToken(tok::comma)) {
+ if (IdentifierLoc *Opt = parseOption()) {
+ Identifiers.push_back(Opt);
+ } else {
+ SkipUntil(tok::r_paren, StopAtSemi);
+ return;
+ }
+ }
+
+ if (T.consumeClose())
+ return;
+
+ if (EndLoc)
+ *EndLoc = T.getCloseLocation();
+
+ SmallVector<ArgsUnion, 4> Args;
+ for (auto *IL : Identifiers)
+ Args.push_back(IL);
+
+ Attrs.addNew(&AttrName, SourceRange(AttrNameLoc, T.getCloseLocation()),
+ ScopeName, ScopeLoc, Args.data(), Args.size(), Form);
+}
+
unsigned Parser::ParseClangAttributeArgs(
IdentifierInfo *AttrName, SourceLocation AttrNameLoc,
ParsedAttributes &Attrs, SourceLocation *EndLoc, IdentifierInfo *ScopeName,
@@ -801,6 +862,10 @@ unsigned Parser::ParseClangAttributeArgs(
case ParsedAttr::AT_CXXAssume:
ParseCXXAssumeAttributeArg(Attrs, AttrName, AttrNameLoc, EndLoc, Form);
break;
+ case ParsedAttr::AT_Atomic:
+ ParseAtomicAttribute(*AttrName, AttrNameLoc, Attrs, EndLoc, ScopeName,
+ ScopeLoc, Form);
+ break;
}
return !Attrs.empty() ? Attrs.begin()->getNumArgs() : 0;
}
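The surface syntax accepted by `ParseAtomicAttribute` is a comma-separated list of identifiers, each optionally prefixed with `!`. A plain string splitter can sketch that shape (illustrative only; the real parser works on Clang's token stream and emits diagnostics for malformed input):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Split "a, !b, c" into (name, negated) pairs, mirroring how the parser
// records a '!'-prefixed option. No diagnostics; empty tokens are skipped.
std::vector<std::pair<std::string, bool>>
splitAtomicOptions(const std::string &List) {
  std::vector<std::pair<std::string, bool>> Opts;
  std::string Tok;
  auto Flush = [&] {
    if (Tok.empty())
      return;
    bool Negated = Tok.front() == '!';
    Opts.emplace_back(Negated ? Tok.substr(1) : Tok, Negated);
    Tok.clear();
  };
  for (char C : List) {
    if (C == ',')
      Flush();
    else if (C != ' ')
      Tok += C;
  }
  Flush();
  return Opts;
}
```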
diff --git a/clang/lib/Sema/SemaStmtAttr.cpp b/clang/lib/Sema/SemaStmtAttr.cpp
index 422d8abc1028aa..ce68b239acff6a 100644
--- a/clang/lib/Sema/SemaStmtAttr.cpp
+++ b/clang/lib/Sema/SemaStmtAttr.cpp
@@ -625,6 +625,45 @@ static Attr *handleHLSLControlFlowHint(Sema &S, Stmt *St, const ParsedAttr &A,
return ::new (S.Context) HLSLControlFlowHintAttr(S.Context, A);
}
+static Attr *handleAtomicAttr(Sema &S, Stmt *St, const ParsedAttr &A,
+ SourceRange Range) {
+ if (!isa<CompoundStmt>(St)) {
+ S.Diag(St->getBeginLoc(), diag::err_attribute_wrong_decl_type)
+ << A << "compound statement";
+ return nullptr;
+ }
+
+ SmallVector<StringRef, 4> OptionStrings;
+ AtomicOptionsOverride AOO;
+
+ for (unsigned i = 0; i < A.getNumArgs(); ++i) {
+ IdentifierLoc *Arg = A.getArgAsIdent(i);
+ if (!Arg || !Arg->Ident) {
+ S.Diag(A.getLoc(), diag::err_attribute_argument_type)
+ << A << AANT_ArgumentIdentifier;
+ return nullptr;
+ }
+
+ StringRef Option = Arg->Ident->getName();
+ bool IsNegated = Option.starts_with("!");
+ StringRef CleanOption = IsNegated ? Option.drop_front(1) : Option;
+
+ if (auto Kind =
+ AtomicOptionsOverride::parseAtomicOverrideKey(CleanOption)) {
+ AOO.setAtomicOverride(*Kind, !IsNegated);
+ OptionStrings.push_back(Option);
+ } else {
+ S.Diag(Arg->Loc, diag::err_attribute_invalid_atomic_argument)
+ << Option << A;
+ return nullptr;
+ }
+ }
+
+ auto *AA = ::new (S.Context)
+ AtomicAttr(S.Context, A, OptionStrings.data(), OptionStrings.size());
+ AA->setAtomicOptionsOverride(AOO);
+ return AA;
+}
+
static Attr *ProcessStmtAttribute(Sema &S, Stmt *St, const ParsedAttr &A,
SourceRange Range) {
if (A.isInvalid() || A.getKind() == ParsedAttr::IgnoredAttribute)
@@ -685,6 +724,8 @@ static Attr *ProcessStmtAttribute(Sema &S, Stmt *St, const ParsedAttr &A,
return handleNoConvergentAttr(S, St, A, Range);
case ParsedAttr::AT_Annotate:
return S.CreateAnnotationAttr(A);
+ case ParsedAttr::AT_Atomic:
+ return handleAtomicAttr(S, St, A, Range);
default:
if (Attr *AT = nullptr; A.getInfo().handleStmtAttribute(S, St, A, AT) !=
ParsedAttrInfo::NotHandled) {
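How `handleAtomicAttr` folds the parsed identifiers into an override set can be sketched with a map standing in for `AtomicOptionsOverride`; `foldAtomicOverrides` and the map representation are assumptions for illustration, not Clang APIs:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <set>
#include <string>
#include <vector>

// A leading '!' sets the option to off, otherwise on; an unknown name is
// rejected (the real code diagnoses err_attribute_invalid_atomic_argument).
std::optional<std::map<std::string, bool>>
foldAtomicOverrides(const std::vector<std::string> &Options) {
  static const std::set<std::string> Known = {
      "no_fine_grained_memory", "no_remote_memory", "ignore_denormal_mode"};
  std::map<std::string, bool> Overrides;
  for (const std::string &Opt : Options) {
    bool Negated = !Opt.empty() && Opt.front() == '!';
    std::string Key = Negated ? Opt.substr(1) : Opt;
    if (!Known.count(Key))
      return std::nullopt;
    Overrides[Key] = !Negated; // a later mention overrides an earlier one
  }
  return Overrides;
}
```

During codegen, options absent from the override set fall back to the enclosing attribute's values or the target defaults, which is what makes the nested-attribute cases in the AST test below compose.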
diff --git a/clang/test/AST/ast-dump-atomic-options.hip b/clang/test/AST/ast-dump-atomic-options.hip
new file mode 100644
index 00000000000000..e3b2937164a62b
--- /dev/null
+++ b/clang/test/AST/ast-dump-atomic-options.hip
@@ -0,0 +1,102 @@
+// RUN: %clang_cc1 -ast-dump %s | FileCheck %s
+// RUN: %clang_cc1 -ast-dump -fcuda-is-device %s | FileCheck %s
+// RUN: %clang_cc1 -ast-dump -fcuda-is-device %s \
+// RUN: -fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on \
+// RUN: | FileCheck %s
+
+#include "Inputs/cuda.h"
+
+// CHECK-LABEL: FunctionDecl {{.*}} test_default
+// CHECK-NOT: AttributedStmt
+// CHECK-NOT: AtomicAttr
+// CHECK: CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+__device__ __host__ void test_default(float *a) {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+}
+
+// CHECK-LABEL: FunctionDecl {{.*}} test_one
+// CHECK: `-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} no_remote_memory
+// CHECK-NEXT: `-CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+__device__ __host__ void test_one(float *a) {
+ [[clang::atomic(no_remote_memory)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// CHECK-LABEL: FunctionDecl {{.*}} test_two
+// CHECK: `-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} !no_remote_memory ignore_denormal_mode
+// CHECK-NEXT: `-CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+__device__ __host__ void test_two(float *a) {
+ [[clang::atomic(!no_remote_memory, ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// CHECK-LABEL: FunctionDecl {{.*}} test_three
+// CHECK: `-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} no_remote_memory !no_fine_grained_memory !ignore_denormal_mode
+// CHECK-NEXT: `-CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+__device__ __host__ void test_three(float *a) {
+ [[clang::atomic(no_remote_memory, !no_fine_grained_memory, !ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// CHECK-LABEL: FunctionDecl {{.*}} test_nested
+// CHECK: CompoundStmt
+// CHECK: |-AtomicExpr
+// CHECK: `-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} !no_remote_memory !no_fine_grained_memory !ignore_denormal_mode
+// CHECK-NEXT: `-CompoundStmt
+// CHECK: |-AtomicExpr
+// CHECK: |-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} no_remote_memory
+// CHECK-NEXT: `-CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+// CHECK: `-AttributedStmt
+// CHECK-NEXT: |-AtomicAttr {{.*}} no_fine_grained_memory
+// CHECK-NEXT: `-CompoundStmt
+// CHECK-NEXT: `-AtomicExpr
+__device__ __host__ void test_nested(float *a) {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ [[clang::atomic(!no_remote_memory, !no_fine_grained_memory, !ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_max(a, 2, __ATOMIC_SEQ_CST, __MEMORY_SCOPE_DEVICE);
+ [[clang::atomic(no_remote_memory)]] {
+ __scoped_atomic_fetch_min(a, 3, __ATOMIC_ACQUIRE, __MEMORY_SCOPE_WRKGRP);
+ }
+ [[clang::atomic(no_fine_grained_memory)]] {
+ __scoped_atomic_fetch_sub(a, 4, __ATOMIC_RELEASE, __MEMORY_SCOPE_WVFRNT);
+ }
+ }
+}
+
+// CHECK-LABEL: FunctionTemplateDecl {{.*}} test_template
+// CHECK: |-FunctionDecl {{.*}} test_template 'void (T *)'
+// CHECK: | |-CompoundStmt
+// CHECK: | | `-AttributedStmt
+// CHECK: | | |-AtomicAttr {{.*}} no_remote_memory
+// CHECK: | | `-CompoundStmt
+// CHECK: | | `-CallExpr {{.*}} '<dependent type>'
+// CHECK: `-FunctionDecl {{.*}} used test_template 'void (float *)' implicit_instantiation
+// CHECK: |-CompoundStmt
+// CHECK: | `-AttributedStmt
+// CHECK: | |-AtomicAttr {{.*}} no_remote_memory !no_fine_grained_memory !ignore_denormal_mode
+// CHECK: | `-CompoundStmt
+// CHECK: | `-AtomicExpr {{.*}} 'float'
+template<typename T>
+__device__ __host__ void test_template(T *a) {
+ [[clang::atomic(no_remote_memory, !no_fine_grained_memory, !ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+__device__ __host__ void test_template_caller() {
+ float *p;
+ test_template(p);
+}
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..d74470304c69e1 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -7,7 +7,7 @@
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_post_inc.n to ptr), float 1.000000e+00 seq_cst, align 4
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_post_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3:![0-9]+]], !amdgpu.no.remote.memory [[META3]]
// SAFE-NEXT: ret float [[TMP0]]
//
// UNSAFE-LABEL: define dso_local float @test_float_post_inc(
@@ -15,7 +15,7 @@
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_post_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3:![0-9]+]], !amdgpu.ignore.denormal.mode [[META3]]
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_post_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3:![0-9]+]], !amdgpu.no.remote.memory [[META3]], !amdgpu.ignore.denormal.mode [[META3]]
// UNSAFE-NEXT: ret float [[TMP0]]
//
float test_float_post_inc()
@@ -24,21 +24,13 @@ float test_float_post_inc()
return n++;
}
-// SAFE-LABEL: define dso_local float @test_float_post_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_post_dc.n to ptr), float 1.000000e+00 seq_cst, align 4
-// SAFE-NEXT: ret float [[TMP0]]
-//
-// UNSAFE-LABEL: define dso_local float @test_float_post_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_post_dc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: ret float [[TMP0]]
+// CHECK-LABEL: define dso_local float @test_float_post_dc(
+// CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_post_dc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3:![0-9]+]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: ret float [[TMP0]]
//
float test_float_post_dc()
{
@@ -46,23 +38,14 @@ float test_float_post_dc()
return n--;
}
-// SAFE-LABEL: define dso_local float @test_float_pre_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_pre_dc.n to ptr), float 1.000000e+00 seq_cst, align 4
-// SAFE-NEXT: [[TMP1:%.*]] = fsub float [[TMP0]], 1.000000e+00
-// SAFE-NEXT: ret float [[TMP1]]
-//
-// UNSAFE-LABEL: define dso_local float @test_float_pre_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_pre_dc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fsub float [[TMP0]], 1.000000e+00
-// UNSAFE-NEXT: ret float [[TMP1]]
+// CHECK-LABEL: define dso_local float @test_float_pre_dc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_float_pre_dc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: [[TMP1:%.*]] = fsub float [[TMP0]], 1.000000e+00
+// CHECK-NEXT: ret float [[TMP1]]
//
float test_float_pre_dc()
{
@@ -75,7 +58,7 @@ float test_float_pre_dc()
// SAFE-NEXT: [[ENTRY:.*:]]
// SAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_pre_inc.n to ptr), float 1.000000e+00 seq_cst, align 4
+// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_pre_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
// SAFE-NEXT: [[TMP1:%.*]] = fadd float [[TMP0]], 1.000000e+00
// SAFE-NEXT: ret float [[TMP1]]
//
@@ -84,7 +67,7 @@ float test_float_pre_dc()
// UNSAFE-NEXT: [[ENTRY:.*:]]
// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca float, align 4, addrspace(5)
// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_pre_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.ignore.denormal.mode [[META3]]
+// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_float_pre_inc.n to ptr), float 1.000000e+00 seq_cst, align 4, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]], !amdgpu.ignore.denormal.mode [[META3]]
// UNSAFE-NEXT: [[TMP1:%.*]] = fadd float [[TMP0]], 1.000000e+00
// UNSAFE-NEXT: ret float [[TMP1]]
//
@@ -94,21 +77,13 @@ float test_float_pre_inc()
return ++n;
}
-// SAFE-LABEL: define dso_local double @test_double_post_inc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_post_inc.n to ptr), double 1.000000e+00 seq_cst, align 8
-// SAFE-NEXT: ret double [[TMP0]]
-//
-// UNSAFE-LABEL: define dso_local double @test_double_post_inc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_post_inc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: ret double [[TMP0]]
+// CHECK-LABEL: define dso_local double @test_double_post_inc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_post_inc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: ret double [[TMP0]]
//
double test_double_post_inc()
{
@@ -116,21 +91,13 @@ double test_double_post_inc()
return n++;
}
-// SAFE-LABEL: define dso_local double @test_double_post_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_post_dc.n to ptr), double 1.000000e+00 seq_cst, align 8
-// SAFE-NEXT: ret double [[TMP0]]
-//
-// UNSAFE-LABEL: define dso_local double @test_double_post_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_post_dc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: ret double [[TMP0]]
+// CHECK-LABEL: define dso_local double @test_double_post_dc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_post_dc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: ret double [[TMP0]]
//
double test_double_post_dc()
{
@@ -138,23 +105,14 @@ double test_double_post_dc()
return n--;
}
-// SAFE-LABEL: define dso_local double @test_double_pre_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_pre_dc.n to ptr), double 1.000000e+00 seq_cst, align 8
-// SAFE-NEXT: [[TMP1:%.*]] = fsub double [[TMP0]], 1.000000e+00
-// SAFE-NEXT: ret double [[TMP1]]
-//
-// UNSAFE-LABEL: define dso_local double @test_double_pre_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_pre_dc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fsub double [[TMP0]], 1.000000e+00
-// UNSAFE-NEXT: ret double [[TMP1]]
+// CHECK-LABEL: define dso_local double @test_double_pre_dc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test_double_pre_dc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: [[TMP1:%.*]] = fsub double [[TMP0]], 1.000000e+00
+// CHECK-NEXT: ret double [[TMP1]]
//
double test_double_pre_dc()
{
@@ -162,23 +120,14 @@ double test_double_pre_dc()
return --n;
}
-// SAFE-LABEL: define dso_local double @test_double_pre_inc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_pre_inc.n to ptr), double 1.000000e+00 seq_cst, align 8
-// SAFE-NEXT: [[TMP1:%.*]] = fadd double [[TMP0]], 1.000000e+00
-// SAFE-NEXT: ret double [[TMP1]]
-//
-// UNSAFE-LABEL: define dso_local double @test_double_pre_inc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_pre_inc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fadd double [[TMP0]], 1.000000e+00
-// UNSAFE-NEXT: ret double [[TMP1]]
+// CHECK-LABEL: define dso_local double @test_double_pre_inc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca double, align 8, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test_double_pre_inc.n to ptr), double 1.000000e+00 seq_cst, align 8, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: [[TMP1:%.*]] = fadd double [[TMP0]], 1.000000e+00
+// CHECK-NEXT: ret double [[TMP1]]
//
double test_double_pre_inc()
{
@@ -186,21 +135,13 @@ double test_double_pre_inc()
return ++n;
}
-// SAFE-LABEL: define dso_local half @test__Float16_post_inc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: ret half [[TMP0]]
-//
-// UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: ret half [[TMP0]]
+// CHECK-LABEL: define dso_local half @test__Float16_post_inc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: ret half [[TMP0]]
//
_Float16 test__Float16_post_inc()
{
@@ -208,21 +149,13 @@ _Float16 test__Float16_post_inc()
return n++;
}
-// SAFE-LABEL: define dso_local half @test__Float16_post_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: ret half [[TMP0]]
-//
-// UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: ret half [[TMP0]]
+// CHECK-LABEL: define dso_local half @test__Float16_post_dc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: ret half [[TMP0]]
//
_Float16 test__Float16_post_dc()
{
@@ -230,23 +163,14 @@ _Float16 test__Float16_post_dc()
return n--;
}
-// SAFE-LABEL: define dso_local half @test__Float16_pre_dc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], 0xH3C00
-// SAFE-NEXT: ret half [[TMP1]]
-//
-// UNSAFE-LABEL: define dso_local half @test__Float16_pre_dc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], 0xH3C00
-// UNSAFE-NEXT: ret half [[TMP1]]
+// CHECK-LABEL: define dso_local half @test__Float16_pre_dc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: [[TMP1:%.*]] = fsub half [[TMP0]], 0xH3C00
+// CHECK-NEXT: ret half [[TMP1]]
//
_Float16 test__Float16_pre_dc()
{
@@ -254,23 +178,14 @@ _Float16 test__Float16_pre_dc()
return --n;
}
-// SAFE-LABEL: define dso_local half @test__Float16_pre_inc(
-// SAFE-SAME: ) #[[ATTR0]] {
-// SAFE-NEXT: [[ENTRY:.*:]]
-// SAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// SAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half 0xH3C00 seq_cst, align 2
-// SAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH3C00
-// SAFE-NEXT: ret half [[TMP1]]
-//
-// UNSAFE-LABEL: define dso_local half @test__Float16_pre_inc(
-// UNSAFE-SAME: ) #[[ATTR0]] {
-// UNSAFE-NEXT: [[ENTRY:.*:]]
-// UNSAFE-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
-// UNSAFE-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
-// UNSAFE-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH3C00
-// UNSAFE-NEXT: ret half [[TMP1]]
+// CHECK-LABEL: define dso_local half @test__Float16_pre_inc(
+// CHECK-SAME: ) #[[ATTR0]] {
+// CHECK-NEXT: [[ENTRY:.*:]]
+// CHECK-NEXT: [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
+// CHECK-NEXT: [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
+// CHECK-NEXT: [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_pre_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]], !amdgpu.no.remote.memory [[META3]]
+// CHECK-NEXT: [[TMP1:%.*]] = fadd half [[TMP0]], 0xH3C00
+// CHECK-NEXT: ret half [[TMP1]]
//
_Float16 test__Float16_pre_inc()
{
@@ -278,7 +193,7 @@ _Float16 test__Float16_pre_inc()
return ++n;
}
//.
+// SAFE: [[META3]] = !{}
+//.
// UNSAFE: [[META3]] = !{}
//.
-//// NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
-// CHECK: {{.*}}
diff --git a/clang/test/CodeGenCUDA/amdgpu-atomic-ops.cu b/clang/test/CodeGenCUDA/amdgpu-atomic-ops.cu
index 47fa3967fe2373..5eefe660688a53 100644
--- a/clang/test/CodeGenCUDA/amdgpu-atomic-ops.cu
+++ b/clang/test/CodeGenCUDA/amdgpu-atomic-ops.cu
@@ -1,19 +1,19 @@
// RUN: %clang_cc1 -x hip %s -emit-llvm -o - -triple=amdgcn-amd-amdhsa \
// RUN: -fcuda-is-device -target-cpu gfx906 -fnative-half-type \
-// RUN: -fnative-half-arguments-and-returns | FileCheck -check-prefixes=CHECK,SAFEIR %s
+// RUN: -fnative-half-arguments-and-returns | FileCheck -check-prefixes=FUN,CHECK,SAFEIR %s
// RUN: %clang_cc1 -x hip %s -emit-llvm -o - -triple=amdgcn-amd-amdhsa \
// RUN: -fcuda-is-device -target-cpu gfx906 -fnative-half-type \
-// RUN: -fnative-half-arguments-and-returns -munsafe-fp-atomics | FileCheck -check-prefixes=CHECK,UNSAFEIR %s
+// RUN: -fnative-half-arguments-and-returns -munsafe-fp-atomics | FileCheck -check-prefixes=FUN,CHECK,UNSAFEIR %s
// RUN: %clang_cc1 -x hip %s -O3 -S -o - -triple=amdgcn-amd-amdhsa \
// RUN: -fcuda-is-device -target-cpu gfx1100 -fnative-half-type \
-// RUN: -fnative-half-arguments-and-returns | FileCheck -check-prefix=SAFE %s
+// RUN: -fnative-half-arguments-and-returns | FileCheck -check-prefixes=FUN,SAFE %s
// RUN: %clang_cc1 -x hip %s -O3 -S -o - -triple=amdgcn-amd-amdhsa \
// RUN: -fcuda-is-device -target-cpu gfx940 -fnative-half-type \
// RUN: -fnative-half-arguments-and-returns -munsafe-fp-atomics \
-// RUN: | FileCheck -check-prefix=UNSAFE %s
+// RUN: | FileCheck -check-prefixes=FUN,UNSAFE %s
// REQUIRES: amdgpu-registered-target
@@ -21,34 +21,32 @@
#include <stdatomic.h>
__global__ void ffp1(float *p) {
- // CHECK-LABEL: @_Z4ffp1Pf
- // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 4{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 4{{$}}
- // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE:[0-9]+]]{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
-
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4, !amdgpu.no.fine.grained.memory !{{[0-9]+}}, !amdgpu.ignore.denormal.mode !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 4, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 4, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE:[0-9]+]], !amdgpu.no.fine.grained.memory !{{[0-9]+}}, !amdgpu.ignore.denormal.mode !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
-
- // SAFE: _Z4ffp1Pf
- // SAFE: global_atomic_cmpswap
- // SAFE: global_atomic_cmpswap
- // SAFE: global_atomic_cmpswap
- // SAFE: global_atomic_cmpswap
- // SAFE: global_atomic_cmpswap
+ // FUN-LABEL: @_Z4ffp1Pf
+ // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4, [[DEFMD:!amdgpu.no.fine.grained.memory ![0-9]+, !amdgpu.no.remote.memory ![0-9]+$]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4, [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 4, [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 4, [[DEFMD]]
+ // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE:[0-9]+]], [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4, [[FADDMD:!amdgpu.no.fine.grained.memory ![0-9]+, !amdgpu.no.remote.memory ![0-9]+, !amdgpu.ignore.denormal.mode ![0-9]+$]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4, [[DEFMD:!amdgpu.no.fine.grained.memory ![0-9]+, !amdgpu.no.remote.memory ![0-9]+$]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 4, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 4, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE:[0-9]+]], [[FADDMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
+ // SAFE: global_atomic_add_f32
// SAFE: global_atomic_cmpswap
+ // SAFE: global_atomic_max
+ // SAFE: global_atomic_min
+ // SAFE: global_atomic_max
+ // SAFE: global_atomic_min
- // UNSAFE: _Z4ffp1Pf
// UNSAFE: global_atomic_add_f32
// UNSAFE: global_atomic_cmpswap
// UNSAFE: global_atomic_cmpswap
@@ -68,26 +66,25 @@ __global__ void ffp1(float *p) {
}
__global__ void ffp2(double *p) {
- // CHECK-LABEL: @_Z4ffp2Pd
- // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
-
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
-
- // SAFE-LABEL: @_Z4ffp2Pd
+ // FUN-LABEL: @_Z4ffp2Pd
+ // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
@@ -95,7 +92,6 @@ __global__ void ffp2(double *p) {
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
- // UNSAFE-LABEL: @_Z4ffp2Pd
// UNSAFE: global_atomic_add_f64
// UNSAFE: global_atomic_cmpswap_x2
// UNSAFE: global_atomic_max_f64
@@ -114,32 +110,31 @@ __global__ void ffp2(double *p) {
// long double is the same as double for amdgcn.
__global__ void ffp3(long double *p) {
- // CHECK-LABEL: @_Z4ffp3Pe
- // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8{{$}}
- // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
-
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
-
- // SAFE-LABEL: @_Z4ffp3Pe
+ // FUN-LABEL: @_Z4ffp3Pe
+ // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
// SAFE: global_atomic_cmpswap_b64
- // UNSAFE-LABEL: @_Z4ffp3Pe
+
// UNSAFE: global_atomic_cmpswap_x2
// UNSAFE: global_atomic_max_f64
// UNSAFE: global_atomic_min_f64
@@ -156,48 +151,48 @@ __global__ void ffp3(long double *p) {
}
__device__ double ffp4(double *p, float f) {
- // CHECK-LABEL: @_Z4ffp4Pdf
+ // FUN-LABEL: @_Z4ffp4Pdf
// CHECK: fpext contract float {{.*}} to double
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8{{$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
__atomic_fetch_sub(p, f, memory_order_relaxed);
return __hip_atomic_fetch_sub(p, f, memory_order_relaxed, __HIP_MEMORY_SCOPE_AGENT);
}
__device__ double ffp5(double *p, int i) {
- // CHECK-LABEL: @_Z4ffp5Pdi
+ // FUN-LABEL: @_Z4ffp5Pdi
// CHECK: sitofp i32 {{.*}} to double
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8{{$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, [[DEFMD]]
__atomic_fetch_sub(p, i, memory_order_relaxed);
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 8, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
return __hip_atomic_fetch_sub(p, i, memory_order_relaxed, __HIP_MEMORY_SCOPE_AGENT);
}
__global__ void ffp6(_Float16 *p) {
- // CHECK-LABEL: @_Z4ffp6PDF16
- // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 2{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 2{{$}}
- // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
- // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]]{{$}}
-
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 2, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 2, !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
- // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], !amdgpu.no.fine.grained.memory !{{[0-9]+$}}
+ // FUN-LABEL: @_Z4ffp6PDF16
+ // SAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // SAFEIR: atomicrmw fadd ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fsub ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // SAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} monotonic, align 2, [[DEFMD]]
+ // UNSAFEIR: atomicrmw fadd ptr {{.*}} monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fsub ptr {{.*}} monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmax ptr {{.*}} syncscope("agent") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
+ // UNSAFEIR: atomicrmw fmin ptr {{.*}} syncscope("workgroup") monotonic, align 2, !noalias.addrspace ![[$NO_PRIVATE]], [[DEFMD]]
// SAFE: _Z4ffp6PDF16
// SAFE: global_atomic_cmpswap
diff --git a/clang/test/CodeGenCUDA/atomic-ops.cu b/clang/test/CodeGenCUDA/atomic-ops.cu
index d8489b438015d0..a41e6a6fb2dc72 100644
--- a/clang/test/CodeGenCUDA/atomic-ops.cu
+++ b/clang/test/CodeGenCUDA/atomic-ops.cu
@@ -4,14 +4,14 @@
// CHECK-LABEL: @_Z24atomic32_op_singlethreadPiii
// CHECK: cmpxchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK:[0-9]+]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD:!amdgpu.no.fine.grained.memory ![0-9]+, !amdgpu.no.remote.memory ![0-9]+$]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load atomic i32, ptr {{%[0-9]+}} syncscope("singlethread") monotonic, align 4{{$}}
// CHECK: store atomic i32 %{{.*}}, ptr %{{.*}} syncscope("singlethread") monotonic, align 4{{$}}
__device__ int atomic32_op_singlethread(int *ptr, int val, int desired) {
@@ -31,8 +31,8 @@ __device__ int atomic32_op_singlethread(int *ptr, int val, int desired) {
}
// CHECK-LABEL: @_Z25atomicu32_op_singlethreadPjjj
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("singlethread") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
__device__ unsigned int atomicu32_op_singlethread(unsigned int *ptr, unsigned int val, unsigned int desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_SINGLETHREAD);
val = __hip_atomic_fetch_max(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_SINGLETHREAD);
@@ -42,14 +42,14 @@ __device__ unsigned int atomicu32_op_singlethread(unsigned int *ptr, unsigned in
// CHECK-LABEL: @_Z21atomic32_op_wavefrontPiii
// CHECK: cmpxchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load atomic i32, ptr {{%[0-9]+}} syncscope("wavefront") monotonic, align 4{{$}}
// CHECK: store atomic i32 %{{.*}}, ptr %{{.*}} syncscope("wavefront") monotonic, align 4{{$}}
__device__ int atomic32_op_wavefront(int *ptr, int val, int desired) {
@@ -69,8 +69,8 @@ __device__ int atomic32_op_wavefront(int *ptr, int val, int desired) {
}
// CHECK-LABEL: @_Z22atomicu32_op_wavefrontPjjj
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("wavefront") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
__device__ unsigned int atomicu32_op_wavefront(unsigned int *ptr, unsigned int val, unsigned int desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WAVEFRONT);
val = __hip_atomic_fetch_max(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WAVEFRONT);
@@ -80,14 +80,14 @@ __device__ unsigned int atomicu32_op_wavefront(unsigned int *ptr, unsigned int v
// CHECK-LABEL: @_Z21atomic32_op_workgroupPiii
// CHECK: cmpxchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i32 %{{.*}}, ptr %{{.*}} syncscope("workgroup") monotonic, align 4{{$}}
__device__ int atomic32_op_workgroup(int *ptr, int val, int desired) {
bool flag = __hip_atomic_compare_exchange_strong(ptr, &val, desired, __ATOMIC_RELAXED, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WORKGROUP);
@@ -105,8 +105,8 @@ __device__ int atomic32_op_workgroup(int *ptr, int val, int desired) {
}
// CHECK-LABEL: @_Z22atomicu32_op_workgroupPjjj
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("workgroup") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
__device__ unsigned int atomicu32_op_workgroup(unsigned int *ptr, unsigned int val, unsigned int desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WORKGROUP);
val = __hip_atomic_fetch_max(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WORKGROUP);
@@ -116,14 +116,14 @@ __device__ unsigned int atomicu32_op_workgroup(unsigned int *ptr, unsigned int v
// CHECK-LABEL: @_Z17atomic32_op_agentPiii
// CHECK: cmpxchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i32 %{{.*}}, ptr %{{.*}} syncscope("agent") monotonic, align 4{{$}}
__device__ int atomic32_op_agent(int *ptr, int val, int desired) {
bool flag = __hip_atomic_compare_exchange_strong(ptr, &val, desired, __ATOMIC_RELAXED, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
@@ -141,8 +141,8 @@ __device__ int atomic32_op_agent(int *ptr, int val, int desired) {
}
// CHECK-LABEL: @_Z18atomicu32_op_agentPjjj
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} syncscope("agent") monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
__device__ unsigned int atomicu32_op_agent(unsigned int *ptr, unsigned int val, unsigned int desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
val = __hip_atomic_fetch_max(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
@@ -152,14 +152,14 @@ __device__ unsigned int atomicu32_op_agent(unsigned int *ptr, unsigned int val,
// CHECK-LABEL: @_Z18atomic32_op_systemPiii
// CHECK: cmpxchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i32 {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load i32, ptr %{{.*}}, align 4{{$}}
// CHECK: store atomic i32 %{{.*}}, ptr %{{.*}} monotonic, align 4{{$}}
__device__ int atomic32_op_system(int *ptr, int val, int desired) {
@@ -179,8 +179,8 @@ __device__ int atomic32_op_system(int *ptr, int val, int desired) {
}
// CHECK-LABEL: @_Z19atomicu32_op_systemPjjj
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i32 {{%[0-9]+}} monotonic, align 4, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
__device__ unsigned int atomicu32_op_system(unsigned int *ptr, unsigned int val, unsigned int desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_SYSTEM);
val = __hip_atomic_fetch_max(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_SYSTEM);
@@ -190,14 +190,14 @@ __device__ unsigned int atomicu32_op_system(unsigned int *ptr, unsigned int val,
// CHECK-LABEL: @_Z24atomic64_op_singlethreadPxS_xx
// CHECK: cmpxchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("singlethread") monotonic, align 8{{$}}
__device__ long long atomic64_op_singlethread(long long *ptr, long long *ptr2, long long val, long long desired) {
bool flag = __hip_atomic_compare_exchange_strong(ptr, &val, desired, __ATOMIC_RELAXED, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_SINGLETHREAD);
@@ -215,8 +215,8 @@ __device__ long long atomic64_op_singlethread(long long *ptr, long long *ptr2, l
}
// CHECK-LABEL: @_Z25atomicu64_op_singlethreadPyS_yy
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("singlethread") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load atomic i64, ptr %{{.*}} syncscope("singlethread") monotonic, align 8{{$}}
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("singlethread") monotonic, align 8{{$}}
__device__ unsigned long long atomicu64_op_singlethread(unsigned long long *ptr, unsigned long long *ptr2, unsigned long long val, unsigned long long desired) {
@@ -230,14 +230,14 @@ __device__ unsigned long long atomicu64_op_singlethread(unsigned long long *ptr,
// CHECK-LABEL: @_Z21atomic64_op_wavefrontPxS_xx
// CHECK: cmpxchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load atomic i64, ptr {{%[0-9]+}} syncscope("wavefront") monotonic, align 8{{$}}
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("wavefront") monotonic, align 8{{$}}
__device__ long long atomic64_op_wavefront(long long *ptr, long long *ptr2, long long val, long long desired) {
@@ -257,8 +257,8 @@ __device__ long long atomic64_op_wavefront(long long *ptr, long long *ptr2, long
}
// CHECK-LABEL: @_Z22atomicu64_op_wavefrontPyS_yy
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("wavefront") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load atomic i64, ptr {{%[0-9]+}} syncscope("wavefront") monotonic, align 8{{$}}
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("wavefront") monotonic, align 8{{$}}
__device__ unsigned long long atomicu64_op_wavefront(unsigned long long *ptr, unsigned long long *ptr2, unsigned long long val, unsigned long long desired) {
@@ -272,14 +272,14 @@ __device__ unsigned long long atomicu64_op_wavefront(unsigned long long *ptr, un
// CHECK-LABEL: @_Z21atomic64_op_workgroupPxS_xx
// CHECK: cmpxchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("workgroup") monotonic, align 8{{$}}
__device__ long long atomic64_op_workgroup(long long *ptr, long long *ptr2, long long val, long long desired) {
bool flag = __hip_atomic_compare_exchange_strong(ptr, &val, desired, __ATOMIC_RELAXED, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WORKGROUP);
@@ -297,8 +297,8 @@ __device__ long long atomic64_op_workgroup(long long *ptr, long long *ptr2, long
}
// CHECK-LABEL: @_Z22atomicu64_op_workgroupPyS_yy
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("workgroup") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("workgroup") monotonic, align 8{{$}}
__device__ unsigned long long atomicu64_op_workgroup(unsigned long long *ptr, unsigned long long *ptr2, unsigned long long val, unsigned long long desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_WORKGROUP);
@@ -310,14 +310,14 @@ __device__ unsigned long long atomicu64_op_workgroup(unsigned long long *ptr, un
// CHECK-LABEL: @_Z17atomic64_op_agentPxS_xx
// CHECK: cmpxchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("agent") monotonic, align 8{{$}}
__device__ long long atomic64_op_agent(long long *ptr, long long *ptr2, long long val, long long desired) {
bool flag = __hip_atomic_compare_exchange_strong(ptr, &val, desired, __ATOMIC_RELAXED, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
@@ -335,8 +335,8 @@ __device__ long long atomic64_op_agent(long long *ptr, long long *ptr2, long lon
}
// CHECK-LABEL: @_Z18atomicu64_op_agentPyS_yy
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} syncscope("agent") monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} syncscope("agent") monotonic, align 8{{$}}
__device__ unsigned long long atomicu64_op_agent(unsigned long long *ptr, unsigned long long *ptr2, unsigned long long val, unsigned long long desired) {
val = __hip_atomic_fetch_min(ptr, val, __ATOMIC_RELAXED, __HIP_MEMORY_SCOPE_AGENT);
@@ -348,14 +348,14 @@ __device__ unsigned long long atomicu64_op_agent(unsigned long long *ptr, unsign
// CHECK-LABEL: @_Z18atomic64_op_systemPxS_xx
// CHECK: cmpxchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
// CHECK: cmpxchg weak ptr {{%[0-9]+}}, i64 {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw xchg ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw add ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw sub ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw and ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw or ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw xor ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw min ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw max ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load i64, ptr %{{.*}}, align 8
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} monotonic, align 8{{$}}
__device__ long long atomic64_op_system(long long *ptr, long long *ptr2, long long val, long long desired) {
@@ -375,8 +375,8 @@ __device__ long long atomic64_op_system(long long *ptr, long long *ptr2, long lo
}
// CHECK-LABEL: @_Z19atomicu64_op_systemPyS_yy
-// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
-// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]]{{$}}
+// CHECK: atomicrmw umin ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
+// CHECK: atomicrmw umax ptr {{%[0-9]+}}, i64 {{%[0-9]+}} monotonic, align 8, !noalias.addrspace ![[$NOALIAS_ADDRSPACE_STACK]], [[$DEFMD]]
// CHECK: load i64, ptr %{{.*}}, align 8
// CHECK: store atomic i64 %{{.*}}, ptr %{{.*}} monotonic, align 8{{$}}
__device__ unsigned long long atomicu64_op_system(unsigned long long *ptr, unsigned long long *ptr2, unsigned long long val, unsigned long long desired) {
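The new test file below exercises the statement attribute described in the patch summary. For readers unfamiliar with the feature, a hypothetical usage sketch follows; the `[[clang::atomic(...)]]` spelling and the option names (`no_remote_memory`, `no_fine_grained_memory`, `ignore_denormal_mode`) are taken from the PR description and RFC, not from this diff, and the exact argument syntax at this revision of the patch is an assumption:

```cpp
// Hypothetical illustration only -- attribute spelling assumed from the PR
// summary; requires a clang with this patch applied and a HIP toolchain.
__device__ float add_no_remote(float *a) {
  float old;
  // Applied to a compound statement; atomic operations emitted inside it
  // would carry target metadata (e.g. !amdgpu.no.remote.memory) reflecting
  // these options, overriding the target's defaults or an enclosing
  // attributed statement.
  [[clang::atomic(no_remote_memory)]] {
    old = __scoped_atomic_fetch_add(a, 1.0f, __ATOMIC_RELAXED,
                                    __MEMORY_SCOPE_SYSTEM);
  }
  return old;
}
```

The equivalent translation-unit-wide default would come from the driver option, e.g. `-fatomic=no_remote_memory:on`, which the OPT run line in the test below exercises.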
diff --git a/clang/test/CodeGenCUDA/atomic-options.hip b/clang/test/CodeGenCUDA/atomic-options.hip
new file mode 100644
index 00000000000000..08ac11714f7bfb
--- /dev/null
+++ b/clang/test/CodeGenCUDA/atomic-options.hip
@@ -0,0 +1,456 @@
+// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py UTC_ARGS: --version 5
+// RUN: %clang_cc1 -triple x86_64-unknown-linux-gnu \
+// RUN: -emit-llvm -o - %s | FileCheck --check-prefix=HOST %s
+// RUN: %clang_cc1 -triple amdgcn-amd-amdhsa \
+// RUN: -emit-llvm -o - -fcuda-is-device %s | FileCheck --check-prefix=DEV %s
+// RUN: %clang_cc1 -triple amdgcn-amd-amdhsa \
+// RUN: -fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on \
+// RUN: -emit-llvm -o - -fcuda-is-device %s | FileCheck --check-prefix=OPT %s
+
+#include "Inputs/cuda.h"
+
+// HOST-LABEL: define dso_local void @_Z12test_defaultPf(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define dso_local void @_Z12test_defaultPf(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META4:![0-9]+]], !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define dso_local void @_Z12test_defaultPf(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0:[0-9]+]] {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4:![0-9]+]], !amdgpu.ignore.denormal.mode [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+__device__ __host__ void test_default(float *a) {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+}
+
+// HOST-LABEL: define dso_local void @_Z8test_onePf(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define dso_local void @_Z8test_onePf(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META4]], !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define dso_local void @_Z8test_onePf(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]], !amdgpu.ignore.denormal.mode [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+__device__ __host__ void test_one(float *a) {
+ [[clang::atomic(no_remote_memory)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// HOST-LABEL: define dso_local void @_Z8test_twoPf(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define dso_local void @_Z8test_twoPf(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META4]], !amdgpu.ignore.denormal.mode [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define dso_local void @_Z8test_twoPf(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.ignore.denormal.mode [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+__device__ __host__ void test_two(float *a) {
+ [[clang::atomic(!no_remote_memory, ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// HOST-LABEL: define dso_local void @_Z10test_threePf(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define dso_local void @_Z10test_threePf(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define dso_local void @_Z10test_threePf(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+__device__ __host__ void test_three(float *a) {
+ [[clang::atomic(no_remote_memory, !no_fine_grained_memory, !ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+// HOST-LABEL: define dso_local void @_Z11test_nestedPf(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[DOTATOMICTMP1:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP2:%.*]] = alloca float, align 4
+// HOST-NEXT: [[DOTATOMICTMP3:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP4:%.*]] = alloca float, align 4
+// HOST-NEXT: [[DOTATOMICTMP5:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP6:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP4:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 2.000000e+00, ptr [[DOTATOMICTMP1]], align 4
+// HOST-NEXT: [[TMP5:%.*]] = load float, ptr [[DOTATOMICTMP1]], align 4
+// HOST-NEXT: [[TMP6:%.*]] = atomicrmw fmax ptr [[TMP4]], float [[TMP5]] seq_cst, align 4
+// HOST-NEXT: store float [[TMP6]], ptr [[ATOMIC_TEMP2]], align 4
+// HOST-NEXT: [[TMP7:%.*]] = load float, ptr [[ATOMIC_TEMP2]], align 4
+// HOST-NEXT: [[TMP8:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 3.000000e+00, ptr [[DOTATOMICTMP3]], align 4
+// HOST-NEXT: [[TMP9:%.*]] = load float, ptr [[DOTATOMICTMP3]], align 4
+// HOST-NEXT: [[TMP10:%.*]] = atomicrmw fmin ptr [[TMP8]], float [[TMP9]] acquire, align 4
+// HOST-NEXT: store float [[TMP10]], ptr [[ATOMIC_TEMP4]], align 4
+// HOST-NEXT: [[TMP11:%.*]] = load float, ptr [[ATOMIC_TEMP4]], align 4
+// HOST-NEXT: [[TMP12:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 4.000000e+00, ptr [[DOTATOMICTMP5]], align 4
+// HOST-NEXT: [[TMP13:%.*]] = load float, ptr [[DOTATOMICTMP5]], align 4
+// HOST-NEXT: [[TMP14:%.*]] = atomicrmw fsub ptr [[TMP12]], float [[TMP13]] release, align 4
+// HOST-NEXT: store float [[TMP14]], ptr [[ATOMIC_TEMP6]], align 4
+// HOST-NEXT: [[TMP15:%.*]] = load float, ptr [[ATOMIC_TEMP6]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define dso_local void @_Z11test_nestedPf(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP1:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP2:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP3:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP4:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP5:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP6:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP1_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP1]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP2_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP2]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP3_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP3]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP4_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP4]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP5_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP5]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP6_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP6]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META4]], !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP4:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 2.000000e+00, ptr [[DOTATOMICTMP1_ASCAST]], align 4
+// DEV-NEXT: [[TMP5:%.*]] = load float, ptr [[DOTATOMICTMP1_ASCAST]], align 4
+// DEV-NEXT: [[TMP6:%.*]] = atomicrmw fmax ptr [[TMP4]], float [[TMP5]] syncscope("agent") seq_cst, align 4
+// DEV-NEXT: store float [[TMP6]], ptr [[ATOMIC_TEMP2_ASCAST]], align 4
+// DEV-NEXT: [[TMP7:%.*]] = load float, ptr [[ATOMIC_TEMP2_ASCAST]], align 4
+// DEV-NEXT: [[TMP8:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 3.000000e+00, ptr [[DOTATOMICTMP3_ASCAST]], align 4
+// DEV-NEXT: [[TMP9:%.*]] = load float, ptr [[DOTATOMICTMP3_ASCAST]], align 4
+// DEV-NEXT: [[TMP10:%.*]] = atomicrmw fmin ptr [[TMP8]], float [[TMP9]] syncscope("workgroup") acquire, align 4, !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP10]], ptr [[ATOMIC_TEMP4_ASCAST]], align 4
+// DEV-NEXT: [[TMP11:%.*]] = load float, ptr [[ATOMIC_TEMP4_ASCAST]], align 4
+// DEV-NEXT: [[TMP12:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 4.000000e+00, ptr [[DOTATOMICTMP5_ASCAST]], align 4
+// DEV-NEXT: [[TMP13:%.*]] = load float, ptr [[DOTATOMICTMP5_ASCAST]], align 4
+// DEV-NEXT: [[TMP14:%.*]] = atomicrmw fsub ptr [[TMP12]], float [[TMP13]] syncscope("wavefront") release, align 4, !amdgpu.no.fine.grained.memory [[META4]]
+// DEV-NEXT: store float [[TMP14]], ptr [[ATOMIC_TEMP6_ASCAST]], align 4
+// DEV-NEXT: [[TMP15:%.*]] = load float, ptr [[ATOMIC_TEMP6_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define dso_local void @_Z11test_nestedPf(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP1:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP2:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP3:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP4:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP5:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP6:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP1_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP1]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP2_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP2]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP3_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP3]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP4_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP4]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP5_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP5]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP6_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP6]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]], !amdgpu.ignore.denormal.mode [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP4:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 2.000000e+00, ptr [[DOTATOMICTMP1_ASCAST]], align 4
+// OPT-NEXT: [[TMP5:%.*]] = load float, ptr [[DOTATOMICTMP1_ASCAST]], align 4
+// OPT-NEXT: [[TMP6:%.*]] = atomicrmw fmax ptr [[TMP4]], float [[TMP5]] syncscope("agent") seq_cst, align 4
+// OPT-NEXT: store float [[TMP6]], ptr [[ATOMIC_TEMP2_ASCAST]], align 4
+// OPT-NEXT: [[TMP7:%.*]] = load float, ptr [[ATOMIC_TEMP2_ASCAST]], align 4
+// OPT-NEXT: [[TMP8:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 3.000000e+00, ptr [[DOTATOMICTMP3_ASCAST]], align 4
+// OPT-NEXT: [[TMP9:%.*]] = load float, ptr [[DOTATOMICTMP3_ASCAST]], align 4
+// OPT-NEXT: [[TMP10:%.*]] = atomicrmw fmin ptr [[TMP8]], float [[TMP9]] syncscope("workgroup") acquire, align 4, !amdgpu.no.remote.memory [[META4]]
+// OPT-NEXT: store float [[TMP10]], ptr [[ATOMIC_TEMP4_ASCAST]], align 4
+// OPT-NEXT: [[TMP11:%.*]] = load float, ptr [[ATOMIC_TEMP4_ASCAST]], align 4
+// OPT-NEXT: [[TMP12:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 4.000000e+00, ptr [[DOTATOMICTMP5_ASCAST]], align 4
+// OPT-NEXT: [[TMP13:%.*]] = load float, ptr [[DOTATOMICTMP5_ASCAST]], align 4
+// OPT-NEXT: [[TMP14:%.*]] = atomicrmw fsub ptr [[TMP12]], float [[TMP13]] syncscope("wavefront") release, align 4, !amdgpu.no.fine.grained.memory [[META4]]
+// OPT-NEXT: store float [[TMP14]], ptr [[ATOMIC_TEMP6_ASCAST]], align 4
+// OPT-NEXT: [[TMP15:%.*]] = load float, ptr [[ATOMIC_TEMP6_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+__device__ __host__ void test_nested(float *a) {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ {
+ [[clang::atomic(!no_remote_memory, !no_fine_grained_memory, !ignore_denormal_mode)]] {
+ __scoped_atomic_fetch_max(a, 2, __ATOMIC_SEQ_CST, __MEMORY_SCOPE_DEVICE);
+ {
+ [[clang::atomic(no_remote_memory)]] {
+ __scoped_atomic_fetch_min(a, 3, __ATOMIC_ACQUIRE, __MEMORY_SCOPE_WRKGRP);
+ }
+ }
+ {
+ [[clang::atomic(no_fine_grained_memory)]] {
+ __scoped_atomic_fetch_sub(a, 4, __ATOMIC_RELEASE, __MEMORY_SCOPE_WVFRNT);
+ }
+ }
+ }
+ }
+}
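The nesting behavior exercised by `test_nested` above can be sketched as a small model: each attributed scope starts from the enclosing scope's effective options and overrides only the options it names (`x` turns an option on, `!x` turns it off), and leaving a scope restores the outer state, matching the RAII save/restore described in the commit message. This is an illustrative sketch of the intended semantics, not the actual CodeGen implementation; the default values are inferred from the DEV check lines above.

```python
# Hypothetical model of nested [[clang::atomic(...)]] scopes: inner
# attributes override only the options they name; leaving a scope
# restores the enclosing options (RAII save/restore in CodeGenModule).

# AMDGPU device default suggested by the DEV checks above.
TARGET_DEFAULT = {
    "no_remote_memory": True,
    "no_fine_grained_memory": True,
    "ignore_denormal_mode": False,
}

def apply_scope(outer, overrides):
    """Effective options inside one attributed compound statement."""
    inner = dict(outer)
    inner.update(overrides)
    return inner

# test_nested: the outer attribute clears all three options...
outer = apply_scope(TARGET_DEFAULT, {
    "no_remote_memory": False,
    "no_fine_grained_memory": False,
    "ignore_denormal_mode": False,
})
# ...and the innermost [[clang::atomic(no_remote_memory)]] re-enables one,
# which is why only !amdgpu.no.remote.memory appears on the fmin above.
inner = apply_scope(outer, {"no_remote_memory": True})

assert inner == {"no_remote_memory": True,
                 "no_fine_grained_memory": False,
                 "ignore_denormal_mode": False}
# After the scopes exit, the target default is unchanged.
assert TARGET_DEFAULT["no_fine_grained_memory"] is True
```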
+
+//
+// HOST-LABEL: define weak_odr void @_Z13test_templateIfEvPT_(
+// HOST-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] comdat {
+// HOST-NEXT: [[ENTRY:.*:]]
+// HOST-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8
+// HOST-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4
+// HOST-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4
+// HOST-NEXT: store ptr [[A]], ptr [[A_ADDR]], align 8
+// HOST-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR]], align 8
+// HOST-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP]], align 4
+// HOST-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4
+// HOST-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP]], align 4
+// HOST-NEXT: ret void
+//
+// DEV-LABEL: define internal void @_Z13test_templateIfEvPT_(
+// DEV-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] comdat {
+// DEV-NEXT: [[ENTRY:.*:]]
+// DEV-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// DEV-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// DEV-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// DEV-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// DEV-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// DEV-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// DEV-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]]
+// DEV-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// DEV-NEXT: ret void
+//
+// OPT-LABEL: define internal void @_Z13test_templateIfEvPT_(
+// OPT-SAME: ptr noundef [[A:%.*]]) #[[ATTR0]] comdat {
+// OPT-NEXT: [[ENTRY:.*:]]
+// OPT-NEXT: [[A_ADDR:%.*]] = alloca ptr, align 8, addrspace(5)
+// OPT-NEXT: [[DOTATOMICTMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[ATOMIC_TEMP:%.*]] = alloca float, align 4, addrspace(5)
+// OPT-NEXT: [[A_ADDR_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[A_ADDR]] to ptr
+// OPT-NEXT: [[DOTATOMICTMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[DOTATOMICTMP]] to ptr
+// OPT-NEXT: [[ATOMIC_TEMP_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[ATOMIC_TEMP]] to ptr
+// OPT-NEXT: store ptr [[A]], ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: [[TMP0:%.*]] = load ptr, ptr [[A_ADDR_ASCAST]], align 8
+// OPT-NEXT: store float 1.000000e+00, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP1:%.*]] = load float, ptr [[DOTATOMICTMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP2:%.*]] = atomicrmw fadd ptr [[TMP0]], float [[TMP1]] monotonic, align 4, !amdgpu.no.remote.memory [[META4]], !amdgpu.ignore.denormal.mode [[META4]]
+// OPT-NEXT: store float [[TMP2]], ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: [[TMP3:%.*]] = load float, ptr [[ATOMIC_TEMP_ASCAST]], align 4
+// OPT-NEXT: ret void
+//
+template<typename T> __device__ __host__ void test_template(T *a) {
+ [[clang::atomic(no_remote_memory, !no_fine_grained_memory)]] {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+template __device__ __host__ void test_template<float>(float *a);
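The DEV and OPT check lines above pair each enabled option with one metadata node on the `atomicrmw`. That correspondence can be summarized as a small lookup, shown here as an illustrative sketch (the function name and dict representation are hypothetical, not Clang's API):

```python
def amdgpu_atomic_metadata(opts):
    """Names of the metadata nodes attached to an atomicrmw for a given
    set of effective atomic options (hypothetical helper)."""
    md = []
    if opts.get("no_fine_grained_memory"):
        md.append("!amdgpu.no.fine.grained.memory")
    if opts.get("no_remote_memory"):
        md.append("!amdgpu.no.remote.memory")
    if opts.get("ignore_denormal_mode"):
        md.append("!amdgpu.ignore.denormal.mode")
    return md

# test_template with [[clang::atomic(no_remote_memory,
# !no_fine_grained_memory)]] on the DEV run: only one node is emitted.
assert amdgpu_atomic_metadata(
    {"no_remote_memory": True, "no_fine_grained_memory": False}
) == ["!amdgpu.no.remote.memory"]

# The OPT run layers -fatomic=...,ignore_denormal_mode:on on top.
assert amdgpu_atomic_metadata(
    {"no_remote_memory": True, "ignore_denormal_mode": True}
) == ["!amdgpu.no.remote.memory", "!amdgpu.ignore.denormal.mode"]
```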
+
+//.
+// DEV: [[META4]] = !{}
+//.
+// OPT: [[META4]] = !{}
+//.
diff --git a/clang/test/CodeGenOpenCL/atomic-ops.cl b/clang/test/CodeGenOpenCL/atomic-ops.cl
index 1d850261e5e813..214b3a4314222d 100644
--- a/clang/test/CodeGenOpenCL/atomic-ops.cl
+++ b/clang/test/CodeGenOpenCL/atomic-ops.cl
@@ -70,19 +70,19 @@ void test_addr(global atomic_int *ig, private atomic_int *ip, local atomic_int *
void fi3(atomic_int *i, atomic_uint *ui) {
// CHECK-LABEL: @fi3
- // CHECK: atomicrmw and ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE:![0-9]+]]{{$}}
+ // CHECK: atomicrmw and ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE:![0-9]+]], [[$DEFMD:!amdgpu.no.fine.grained.memory ![0-9]+, !amdgpu.no.remote.memory ![0-9]+$]]
int x = __opencl_atomic_fetch_and(i, 1, memory_order_seq_cst, memory_scope_work_group);
- // CHECK: atomicrmw min ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw min ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
x = __opencl_atomic_fetch_min(i, 1, memory_order_seq_cst, memory_scope_work_group);
- // CHECK: atomicrmw max ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw max ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
x = __opencl_atomic_fetch_max(i, 1, memory_order_seq_cst, memory_scope_work_group);
- // CHECK: atomicrmw umin ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw umin ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
x = __opencl_atomic_fetch_min(ui, 1, memory_order_seq_cst, memory_scope_work_group);
- // CHECK: atomicrmw umax ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw umax ptr %{{[.0-9A-Z_a-z]+}}, i32 %{{[.0-9A-Z_a-z]+}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
x = __opencl_atomic_fetch_max(ui, 1, memory_order_seq_cst, memory_scope_work_group);
}
@@ -186,31 +186,31 @@ void ff2(atomic_float *d) {
float ff3(atomic_float *d) {
// CHECK-LABEL: @ff3
- // CHECK: atomicrmw xchg ptr {{.*}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw xchg ptr {{.*}} syncscope("workgroup") seq_cst, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
return __opencl_atomic_exchange(d, 2, memory_order_seq_cst, memory_scope_work_group);
}
float ff4(global atomic_float *d, float a) {
// CHECK-LABEL: @ff4
- // CHECK: atomicrmw fadd ptr addrspace(1) {{.*}} syncscope("workgroup-one-as") monotonic, align 4{{$}}
+ // CHECK: atomicrmw fadd ptr addrspace(1) {{.*}} syncscope("workgroup-one-as") monotonic, align 4, [[$DEFMD]]
return __opencl_atomic_fetch_add(d, a, memory_order_relaxed, memory_scope_work_group);
}
float ff5(global atomic_double *d, double a) {
// CHECK-LABEL: @ff5
- // CHECK: atomicrmw fadd ptr addrspace(1) {{.*}} syncscope("workgroup-one-as") monotonic, align 8{{$}}
+ // CHECK: atomicrmw fadd ptr addrspace(1) {{.*}} syncscope("workgroup-one-as") monotonic, align 8, [[$DEFMD]]
return __opencl_atomic_fetch_add(d, a, memory_order_relaxed, memory_scope_work_group);
}
float ff4_generic(atomic_float *d, float a) {
// CHECK-LABEL: @ff4_generic
- // CHECK: atomicrmw fadd ptr {{.*}} syncscope("workgroup-one-as") monotonic, align 4, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw fadd ptr {{.*}} syncscope("workgroup-one-as") monotonic, align 4, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
return __opencl_atomic_fetch_add(d, a, memory_order_relaxed, memory_scope_work_group);
}
float ff5_generic(atomic_double *d, double a) {
// CHECK-LABEL: @ff5_generic
- // CHECK: atomicrmw fadd ptr {{.*}} syncscope("workgroup-one-as") monotonic, align 8, !noalias.addrspace [[$NOPRIVATE]]{{$}}
+ // CHECK: atomicrmw fadd ptr {{.*}} syncscope("workgroup-one-as") monotonic, align 8, !noalias.addrspace [[$NOPRIVATE]], [[$DEFMD]]
return __opencl_atomic_fetch_add(d, a, memory_order_relaxed, memory_scope_work_group);
}
diff --git a/clang/test/Driver/atomic-options.hip b/clang/test/Driver/atomic-options.hip
new file mode 100644
index 00000000000000..a0e1aeb3862692
--- /dev/null
+++ b/clang/test/Driver/atomic-options.hip
@@ -0,0 +1,31 @@
+// RUN: %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on \
+// RUN: 2>&1 | FileCheck %s --check-prefix=CHECK-VALID
+
+// CHECK-VALID: "-cc1" {{.*}}"-triple" "amdgcn-amd-amdhsa" {{.*}}"-fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on"
+// CHECK-VALID: "-cc1" {{.*}}"-triple" {{.*}}"-fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on"
+
+// RUN: not %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic=invalid_key:on 2>&1 | FileCheck %s --check-prefix=CHECK-INVALID-KEY
+
+// CHECK-INVALID-KEY: invalid argument 'invalid_key:on' to '-fatomic='; key 'invalid_key' is not allowed; allowed keys are 'no_fine_grained_memory', 'no_remote_memory', 'ignore_denormal_mode'
+
+// RUN: not %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic=no_fine_grained_memory:invalid 2>&1 | FileCheck %s --check-prefix=CHECK-INVALID-VALUE
+
+// CHECK-INVALID-VALUE: invalid argument 'no_fine_grained_memory:invalid' to '-fatomic='; value 'invalid' is invalid; must be boolean (true/false, 1/0, yes/no, on/off)
+
+// RUN: not %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic=no_fine_grained_memory 2>&1 | FileCheck %s --check-prefix=CHECK-MISSING-VALUE
+
+// CHECK-MISSING-VALUE: invalid argument 'no_fine_grained_memory' to '-fatomic='; must be a comma-separated list of key:value pairs
+
+// RUN: not %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic=no_fine_grained_memory:on,no_fine_grained_memory:off 2>&1 | FileCheck %s --check-prefix=CHECK-DUPLICATE-KEY
+
+// CHECK-DUPLICATE-KEY: invalid argument 'no_fine_grained_memory:off' to '-fatomic='; duplicate key 'no_fine_grained_memory'
+
+// RUN: %clang -### -nogpulib -nogpuinc %s \
+// RUN: -fatomic= 2>&1 | FileCheck %s --check-prefix=CHECK-EMPTY
+
+// CHECK-EMPTY: clang: warning: joined argument expects additional value: '-fatomic=' [-Wunused-command-line-argument]
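The driver test above exercises four diagnostic paths for `-fatomic=`: an unknown key, a non-boolean value, a pair missing its `:value`, and a duplicated key. As a minimal standalone sketch of that parsing logic (not the actual Clang driver code; `parseAtomicOption` and its error strings are hypothetical, merely mirroring the diagnostics checked above):

```cpp
#include <map>
#include <optional>
#include <sstream>
#include <string>

// Hypothetical sketch: parse a -fatomic= argument as a comma-separated
// list of key:value pairs, rejecting the same inputs the driver test does.
std::optional<std::map<std::string, bool>>
parseAtomicOption(const std::string &Arg, std::string &Err) {
  static const char *Keys[] = {"no_fine_grained_memory", "no_remote_memory",
                               "ignore_denormal_mode"};
  std::map<std::string, bool> Opts;
  std::stringstream SS(Arg);
  std::string Pair;
  while (std::getline(SS, Pair, ',')) {
    auto Colon = Pair.find(':');
    if (Colon == std::string::npos) { // e.g. "no_fine_grained_memory"
      Err = "must be a comma-separated list of key:value pairs";
      return std::nullopt;
    }
    std::string Key = Pair.substr(0, Colon), Val = Pair.substr(Colon + 1);
    bool Known = false;
    for (const char *K : Keys)
      Known |= (Key == K);
    if (!Known) { // e.g. "invalid_key:on"
      Err = "key '" + Key + "' is not allowed";
      return std::nullopt;
    }
    if (Opts.count(Key)) { // e.g. the key appears twice
      Err = "duplicate key '" + Key + "'";
      return std::nullopt;
    }
    // Accept the boolean spellings named in the diagnostic text.
    if (Val == "on" || Val == "true" || Val == "1" || Val == "yes")
      Opts[Key] = true;
    else if (Val == "off" || Val == "false" || Val == "0" || Val == "no")
      Opts[Key] = false;
    else { // e.g. "no_fine_grained_memory:invalid"
      Err = "value '" + Val + "' is invalid";
      return std::nullopt;
    }
  }
  return Opts;
}
```

Each rejected form above corresponds to one of the `CHECK-INVALID-*`/`CHECK-MISSING-VALUE`/`CHECK-DUPLICATE-KEY` prefixes in the test.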
diff --git a/clang/test/OpenMP/amdgpu-unsafe-fp-atomics.cpp b/clang/test/OpenMP/amdgpu-unsafe-fp-atomics.cpp
index 7a34113cec8fa1..60d7cb008a3689 100644
--- a/clang/test/OpenMP/amdgpu-unsafe-fp-atomics.cpp
+++ b/clang/test/OpenMP/amdgpu-unsafe-fp-atomics.cpp
@@ -11,7 +11,7 @@ double dv, dx;
// DEFAULT-SAME: ) #[[ATTR0:[0-9]+]] {
// DEFAULT-NEXT: [[ENTRY:.*:]]
// DEFAULT-NEXT: [[TMP0:%.*]] = load float, ptr addrspacecast (ptr addrspace(1) @fv to ptr), align 4
-// DEFAULT-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @fx to ptr), float [[TMP0]] monotonic, align 4
+// DEFAULT-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @fx to ptr), float [[TMP0]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META5:![0-9]+]], !amdgpu.no.remote.memory [[META5]]
// DEFAULT-NEXT: [[ADD:%.*]] = fadd float [[TMP1]], [[TMP0]]
// DEFAULT-NEXT: store float [[ADD]], ptr addrspacecast (ptr addrspace(1) @fv to ptr), align 4
// DEFAULT-NEXT: ret void
@@ -20,7 +20,7 @@ double dv, dx;
// UNSAFE-FP-ATOMICS-SAME: ) #[[ATTR0:[0-9]+]] {
// UNSAFE-FP-ATOMICS-NEXT: [[ENTRY:.*:]]
// UNSAFE-FP-ATOMICS-NEXT: [[TMP0:%.*]] = load float, ptr addrspacecast (ptr addrspace(1) @fv to ptr), align 4
-// UNSAFE-FP-ATOMICS-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @fx to ptr), float [[TMP0]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META5:![0-9]+]], !amdgpu.ignore.denormal.mode [[META5]]
+// UNSAFE-FP-ATOMICS-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @fx to ptr), float [[TMP0]] monotonic, align 4, !amdgpu.no.fine.grained.memory [[META5:![0-9]+]], !amdgpu.no.remote.memory [[META5]], !amdgpu.ignore.denormal.mode [[META5]]
// UNSAFE-FP-ATOMICS-NEXT: [[ADD:%.*]] = fadd float [[TMP1]], [[TMP0]]
// UNSAFE-FP-ATOMICS-NEXT: store float [[ADD]], ptr addrspacecast (ptr addrspace(1) @fv to ptr), align 4
// UNSAFE-FP-ATOMICS-NEXT: ret void
@@ -34,7 +34,7 @@ void atomic_fadd_f32() {
// DEFAULT-SAME: ) #[[ATTR0]] {
// DEFAULT-NEXT: [[ENTRY:.*:]]
// DEFAULT-NEXT: [[TMP0:%.*]] = load double, ptr addrspacecast (ptr addrspace(1) @dv to ptr), align 8
-// DEFAULT-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @dx to ptr), double [[TMP0]] monotonic, align 8
+// DEFAULT-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @dx to ptr), double [[TMP0]] monotonic, align 8, !amdgpu.no.fine.grained.memory [[META5]], !amdgpu.no.remote.memory [[META5]]
// DEFAULT-NEXT: [[ADD:%.*]] = fadd double [[TMP1]], [[TMP0]]
// DEFAULT-NEXT: store double [[ADD]], ptr addrspacecast (ptr addrspace(1) @dv to ptr), align 8
// DEFAULT-NEXT: ret void
@@ -43,7 +43,7 @@ void atomic_fadd_f32() {
// UNSAFE-FP-ATOMICS-SAME: ) #[[ATTR0]] {
// UNSAFE-FP-ATOMICS-NEXT: [[ENTRY:.*:]]
// UNSAFE-FP-ATOMICS-NEXT: [[TMP0:%.*]] = load double, ptr addrspacecast (ptr addrspace(1) @dv to ptr), align 8
-// UNSAFE-FP-ATOMICS-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @dx to ptr), double [[TMP0]] monotonic, align 8, !amdgpu.no.fine.grained.memory [[META5]]
+// UNSAFE-FP-ATOMICS-NEXT: [[TMP1:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @dx to ptr), double [[TMP0]] monotonic, align 8, !amdgpu.no.fine.grained.memory [[META5]], !amdgpu.no.remote.memory [[META5]]
// UNSAFE-FP-ATOMICS-NEXT: [[ADD:%.*]] = fadd double [[TMP1]], [[TMP0]]
// UNSAFE-FP-ATOMICS-NEXT: store double [[ADD]], ptr addrspacecast (ptr addrspace(1) @dv to ptr), align 8
// UNSAFE-FP-ATOMICS-NEXT: ret void
@@ -55,5 +55,7 @@ void atomic_fadd_f64() {
#pragma omp end declare target
//.
+// DEFAULT: [[META5]] = !{}
+//.
// UNSAFE-FP-ATOMICS: [[META5]] = !{}
//.
diff --git a/clang/test/Parser/Inputs/cuda.h b/clang/test/Parser/Inputs/cuda.h
new file mode 100644
index 00000000000000..405ef8bb807d90
--- /dev/null
+++ b/clang/test/Parser/Inputs/cuda.h
@@ -0,0 +1,54 @@
+/* Minimal declarations for CUDA support. Testing purposes only. */
+
+#include <stddef.h>
+
+// Make this file work with nvcc, for testing compatibility.
+
+#ifndef __NVCC__
+#define __constant__ __attribute__((constant))
+#define __device__ __attribute__((device))
+#define __global__ __attribute__((global))
+#define __host__ __attribute__((host))
+#define __shared__ __attribute__((shared))
+#define __managed__ __attribute__((managed))
+#define __launch_bounds__(...) __attribute__((launch_bounds(__VA_ARGS__)))
+
+struct dim3 {
+ unsigned x, y, z;
+ __host__ __device__ dim3(unsigned x, unsigned y = 1, unsigned z = 1) : x(x), y(y), z(z) {}
+};
+
+#ifdef __HIP__
+typedef struct hipStream *hipStream_t;
+typedef enum hipError {} hipError_t;
+int hipConfigureCall(dim3 gridSize, dim3 blockSize, size_t sharedSize = 0,
+ hipStream_t stream = 0);
+extern "C" hipError_t __hipPushCallConfiguration(dim3 gridSize, dim3 blockSize,
+ size_t sharedSize = 0,
+ hipStream_t stream = 0);
+extern "C" hipError_t hipLaunchKernel(const void *func, dim3 gridDim,
+ dim3 blockDim, void **args,
+ size_t sharedMem,
+ hipStream_t stream);
+#else
+typedef struct cudaStream *cudaStream_t;
+typedef enum cudaError {} cudaError_t;
+
+extern "C" int cudaConfigureCall(dim3 gridSize, dim3 blockSize,
+ size_t sharedSize = 0,
+ cudaStream_t stream = 0);
+extern "C" int __cudaPushCallConfiguration(dim3 gridSize, dim3 blockSize,
+ size_t sharedSize = 0,
+ cudaStream_t stream = 0);
+extern "C" cudaError_t cudaLaunchKernel(const void *func, dim3 gridDim,
+ dim3 blockDim, void **args,
+ size_t sharedMem, cudaStream_t stream);
+#endif
+
+// Host- and device-side placement new overloads.
+void *operator new(__SIZE_TYPE__, void *p) { return p; }
+void *operator new[](__SIZE_TYPE__, void *p) { return p; }
+__device__ void *operator new(__SIZE_TYPE__, void *p) { return p; }
+__device__ void *operator new[](__SIZE_TYPE__, void *p) { return p; }
+
+#endif // !__NVCC__
diff --git a/clang/test/Parser/atomic-options.hip b/clang/test/Parser/atomic-options.hip
new file mode 100644
index 00000000000000..cc802db0784c71
--- /dev/null
+++ b/clang/test/Parser/atomic-options.hip
@@ -0,0 +1,34 @@
+// RUN: %clang_cc1 -fsyntax-only -verify %s
+// RUN: %clang_cc1 -fsyntax-only -verify -fcuda-is-device %s
+// RUN: %clang_cc1 -fsyntax-only -verify -fcuda-is-device %s \
+// RUN: -fatomic=no_fine_grained_memory:off,no_remote_memory:on,ignore_denormal_mode:on
+
+#include "Inputs/cuda.h"
+
+#if !__has_extension(atomic_attributes)
+#error "We should have atomic attributes support"
+#endif
+
+[[clang::atomic(!no_remote_memory)]] // expected-error {{'atomic' attribute cannot be applied to a declaration}}
+__device__ __host__ void test_location(float *a) {
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ [[clang::atomic(!no_remote_memory)]] int x; // expected-error {{'atomic' attribute cannot be applied to a declaration}}
+}
+
+__device__ __host__ void test_invalid_option(float *a) {
+ [[clang::atomic(fast)]] { // expected-error {{invalid argument 'fast' to atomic attribute; valid options are: 'no_remote_memory', 'no_fine_grained_memory', 'ignore_denormal_mode' (optionally prefixed with '!')}}
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+__device__ __host__ void test_invalid_value(float *a) {
+ [[clang::atomic(no_remote_memory(default))]] { // expected-error {{expected ')'}} expected-note {{to match this '('}}
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
+
+__device__ __host__ void test_invalid_format(float *a) {
+ [[clang::atomic(no_remote_memory=on)]] { // expected-error {{expected ')'}} expected-note {{to match this '('}}
+ __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
+ }
+}
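The parser test above covers only the rejected forms. For contrast, a well-formed use, per the patch description at the top of this message (the attribute takes one, two, or all three options, optionally prefixed with `!`, applies to a compound statement, and may be nested with inner attributes overriding outer ones), would look roughly like the fragment below. This is an illustrative sketch, not part of the patch's tests, and requires a HIP/CUDA toolchain to compile:

```
__device__ __host__ void test_valid_usage(float *a) {
  // Outer scope: assume no access to remote or fine-grained memory.
  [[clang::atomic(no_remote_memory, no_fine_grained_memory)]] {
    __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
    // Inner scope: re-enable remote memory, overriding the outer attribute.
    [[clang::atomic(!no_remote_memory)]] {
      __scoped_atomic_fetch_add(a, 1, __ATOMIC_RELAXED, __MEMORY_SCOPE_SYSTEM);
    }
  }
}
```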