[llvm] [RFC] Memory Model Relaxation Annotations (PR #78569)

Sameer Sahasrabuddhe via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 7 21:18:40 PST 2024


================
@@ -698,6 +700,16 @@ bool Instruction::hasSameSpecialState(const Instruction *I2,
   assert(I1->getOpcode() == I2->getOpcode() &&
          "Can not compare special state of different instructions");
 
+  // MMRAs may change semantics of an operation, e.g. make a fence only
+  // affect a given address space.
+  //
+  // FIXME: Not sure if this stinks or not. Maybe we should just look at
+  // all callers and make them check MMRAs.
+  // OTOH, MMRAs can really alter semantics so this is technically correct
+  // (the best kind of correct).
+  if (MMRAMetadata(*this) != MMRAMetadata(*I2))
----------------
ssahasra wrote:

Merging MMRAs can be very bad for performance. For example, on AMDGPU, vulkan:private is supposed to be cached and vulkan:nonprivate is supposed to bypass the cache. If a merge results in both tags, the backend chooses correctness over performance, which means the resulting operation is not cached. So it seems best if an optimization only merges MMRAs when it understands the impact. To continue the Vulkan example, if two loads are being coalesced into a wider load and one of them is private while the other is nonprivate, it's best not to coalesce them.
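
To make that concrete, here is a rough sketch using the !mmra attachment syntax proposed in this RFC (the function name and code shape are just for illustration; the tags are the vulkan:private / vulkan:nonprivate ones from the example above):

```llvm
define i64 @no_coalesce(ptr %p) {
  %q = getelementptr inbounds i32, ptr %p, i64 1
  ; Coalescing these into a single i64 load would require merging the MMRAs;
  ; with both tags attached the backend has to take the conservative
  ; (uncached) path, so it is better to leave the two loads alone.
  %lo = load i32, ptr %p, align 8, !mmra !0   ; vulkan:private    -> may be cached
  %hi = load i32, ptr %q, align 4, !mmra !1   ; vulkan:nonprivate -> bypasses the cache
  %lo.ext = zext i32 %lo to i64
  %hi.ext = zext i32 %hi to i64
  %hi.shl = shl i64 %hi.ext, 32
  %res = or i64 %lo.ext, %hi.shl
  ret i64 %res
}

!0 = !{!"vulkan", !"private"}
!1 = !{!"vulkan", !"nonprivate"}
```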

https://github.com/llvm/llvm-project/pull/78569

