[llvm] [RFC] Memory Model Relaxation Annotations (PR #78569)

Sameer Sahasrabuddhe via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 12 02:33:36 PDT 2024


ssahasra wrote:

> I believe the best thing to do to preserve MMRA is to require a strict equality, instead of combining. At least for common operations like hoisting/simplifyCFG. However this may be a heavy price to pay because those are core optimizations and inhibiting them too much could eventually be problematic.

I know the comment I am quoting is kinda stale, but I just wanted to discuss how to approach "lost optimizations" when doing something new like this. The guiding principle is "safe by default" ... it's okay to lose optimizations and restore them later, but it's not okay to mistakenly allow an optimization where it shouldn't happen. So strict equality is good enough in my book while we work everything out. I am not saying we should remove what is already done right; I am just trying to close this question.

Another interesting thing about the recent comments from @nikic ... it seems the main objection is that we should not be too eager to handle MMRAs in transforms where they have little impact. Evaluated through the "safe by default" lens, a transform that merges MMRAs where it shouldn't cannot hamper correctness, so this is okay; going in and teaching that transform about MMRAs is an optimization. The opposite "safe by default" choice would be to teach every conceivable transform to back off when MMRAs are present, and then re-enable each one by teaching it how to deal with MMRAs. That sounds like overkill. The first choice is much more practical, and it validates the semantics we chose for MMRAs ... they can be arbitrary sets of tags, because we cannot control who produced them, but the backend must be conservative when consuming them.

https://github.com/llvm/llvm-project/pull/78569
