[llvm] [RFC] Memory Model Relaxation Annotations (PR #78569)

Pierre van Houtryve via llvm-commits llvm-commits at lists.llvm.org
Thu Feb 8 00:10:44 PST 2024


================
@@ -698,6 +700,16 @@ bool Instruction::hasSameSpecialState(const Instruction *I2,
   assert(I1->getOpcode() == I2->getOpcode() &&
          "Can not compare special state of different instructions");
 
+  // MMRAs may change semantics of an operation, e.g. make a fence only
+  // affect a given address space.
+  //
+  // FIXME: Not sure if this stinks or not. Maybe we should just look at
+  // all callers and make them check MMRAs.
+  // OTOH, MMRAs can really alter semantics so this is technically correct
+  // (the best kind of correct).
+  if (MMRAMetadata(*this) != MMRAMetadata(*I2))
----------------
Pierre-vh wrote:

It's one of the major open questions from the RFC, but it unfortunately didn't gather much attention on Discourse (see "Next Steps"): https://discourse.llvm.org/t/rfc-mmras-memory-model-relaxation-annotations/76361

I can see pros and cons for metadata:
- (+) Metadata is less intrusive and models the optional nature of MMRAs well (i.e. dropping them never affects correctness, unlike, say, dropping syncscope); see the IR sketch after this list
- (+) Even if we want to avoid combining MMRAs aggressively, combining them is still a well-defined process; we just don't want it done as part of IR canonicalization passes such as SimplifyCFG, InstCombine, ...
- (-) Metadata isn't an intuitive way to model a property that can alter semantics, though there is precedent for it (e.g. !amdgpu.uniform and similar metadata used to guide codegen)
- (-) Metadata makes it more likely that pass writers simply drop it (or forget about it), so it wouldn't surprise me if we occasionally get issues where a commit drops MMRAs in some pass (though I don't expect that to be a common occurrence)

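To make the first "(+)" point concrete, here is a minimal IR sketch; the tag strings are made up for illustration, only the general shape (an !mmra attachment carrying a (prefix, suffix) string pair) is what the RFC proposes:

```llvm
define void @writer(ptr %p) {
  store i32 1, ptr %p
  ; Annotated fence: the (hypothetical) tag could, for example, restrict
  ; which address space the fence has to order.
  fence release, !mmra !0
  ret void
}

; Made-up tag names, purely for illustration.
!0 = !{!"some-prefix", !"some-suffix"}
```

If a pass drops the !mmra attachment, what remains is a plain `fence release`, which orders at least as much as the annotated fence, so correctness is preserved (possibly at a performance cost).
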
My proposal would be to start with MMRAs as metadata because it's non-intrusive. After all, this is something new, and only time will tell how much use it gets. We have AMDGPU use cases that were tested internally, but we don't know whether other backends will make use of MMRAs.
We can reevaluate later if needed.

https://github.com/llvm/llvm-project/pull/78569
