[llvm] [MSSA] Don't check non-AA memory effects when using template access (PR #76142)

Nikita Popov via llvm-commits llvm-commits at lists.llvm.org
Thu Dec 21 03:29:08 PST 2023


https://github.com/nikic created https://github.com/llvm/llvm-project/pull/76142

Sometimes, we create a MemoryAccess for an instruction that is later simplified (e.g. via devirtualization) such that the new instruction no longer has any memory effects.

If we later clone the instruction (e.g. during unswitching), then MSSA will not create a MemoryAccess for the new instruction, breaking MSSAU assumptions.

There are essentially two ways to solve this. The first is to remove MSSAU's assumption that a MemoryAccess will be created. I initially pursued this approach, but it is not as simple as it sounds. The core problem is that if there is another, later access in the block, we have to find a new defining access for it. This is easy if there is another MemoryDef before the MemoryDef we're no longer creating, but it becomes complicated if the removed def was the first access in the block. In that case we might have to insert a MemoryPhi and perform the necessary rewrites, which is too much hassle for this edge case.
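The rewiring problem behind the first approach can be illustrated with a small standalone sketch (toy types, not the real MemorySSA API): if the dropped access was preceded by another MemoryDef in the same block, later accesses can simply be redefined by that one; if it was the first access in the block, the new defining access has to come from outside the block, which is where a MemoryPhi may become necessary.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for a block's ordered list of memory accesses; these are
// illustrative types, not the real MemorySSA data structures.
struct ToyAccess {
  bool IsDef; // MemoryDef (true) vs. MemoryUse (false)
};

// After removing the access at index Removed, find the index of the new
// in-block defining access for everything that followed it. Returns -1
// when no earlier MemoryDef exists in the block, i.e. the case where a
// MemoryPhi (and the associated rewrites) would be required.
int newDefiningIndex(const std::vector<ToyAccess> &Block, int Removed) {
  for (int I = Removed - 1; I >= 0; --I)
    if (Block[I].IsDef)
      return I; // easy case: an earlier MemoryDef in the same block
  return -1;    // hard case: removed the first access in the block
}
```

The `-1` case is exactly the situation described above: with no in-block def to fall back on, MSSAU would have to synthesize a MemoryPhi, which is the hassle the patch avoids.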

The alternative implemented here is to make sure that the template access is always used if one is specified, even if we have obtained better information in the meantime. This was already the case with respect to AA queries, but the separate non-AA check was previously performed unconditionally, before the template branch. Move this check into the non-template branch only.
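The effect of the chosen fix can be sketched as a reduced model (hypothetical names, not the actual createNewAccess signature): when a template access is supplied it wins unconditionally, and the "no memory effects" bailout only applies on the non-template path.

```cpp
#include <cassert>
#include <optional>

// Reduced model of the patched logic; all names are illustrative only.
enum class Kind { Use, Def };

struct ToyInstr {
  bool MayRead = false;
  bool MayWrite = false;
};

// Template carries the kind recorded for the original instruction, as
// when an access is cloned during loop unswitching.
std::optional<Kind> createAccess(const ToyInstr &I,
                                 std::optional<Kind> Template) {
  if (Template)
    return Template; // always honor the template, preserving MSSAU's
                     // assumption that a cloned access gets an access
  // Non-template path only: do not model instructions with no memory
  // effects (guards against nonstandard AA pipelines).
  if (!I.MayRead && !I.MayWrite)
    return std::nullopt;
  return I.MayWrite ? Kind::Def : Kind::Use;
}
```

In this model, a call that was simplified to readnone but cloned with a Def template still yields a Def, while the same readnone instruction without a template yields no access, mirroring the two branches in the patch.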

>From f8e06a11eec9b422e0e520d8814eccb03b06a522 Mon Sep 17 00:00:00 2001
From: Nikita Popov <npopov at redhat.com>
Date: Thu, 21 Dec 2023 12:20:20 +0100
Subject: [PATCH] [MSSA] Don't check non-AA memory effects when using template
 access

Sometimes, we create a MemoryAccess for an instruction that is
later simplified (e.g. via devirtualization) such that the new
instruction no longer has any memory effects.

If we later clone the instruction (e.g. during unswitching), then
MSSA will not create a MemoryAccess for the new instruction,
breaking MSSAU assumptions.

There are essentially two ways to solve this. The first is to
remove MSSAU's assumption that a MemoryAccess will be created.
I initially pursued this approach, but it is not as simple
as it sounds. The core problem is that if there is another later
access in the block, we'll have to find a new defining access for
it. This is easy if there is another MemoryDef before the MemoryDef
we're no longer creating, but becomes complicated if it was the
first access in the block. In that case we might have to insert
a MemoryPhi and perform the necessary rewrites, which is too
much hassle for this edge case.

The alternative implemented here is to make sure that the template
access is always used if one is specified, even if we have
obtained better information in the meantime. This was already the
case with respect to AA queries, but the separate non-AA check was
previously performed unconditionally. Move this check into the
non-template branch only.
---
 llvm/lib/Analysis/MemorySSA.cpp               |  12 +-
 .../memssa-readnone-access.ll                 | 117 ++++++++++++++++++
 2 files changed, 123 insertions(+), 6 deletions(-)
 create mode 100644 llvm/test/Transforms/SimpleLoopUnswitch/memssa-readnone-access.ll

diff --git a/llvm/lib/Analysis/MemorySSA.cpp b/llvm/lib/Analysis/MemorySSA.cpp
index 2cf92ceba01032..ce4942ea1477af 100644
--- a/llvm/lib/Analysis/MemorySSA.cpp
+++ b/llvm/lib/Analysis/MemorySSA.cpp
@@ -1732,12 +1732,6 @@ MemoryUseOrDef *MemorySSA::createNewAccess(Instruction *I,
     }
   }
 
-  // Using a nonstandard AA pipelines might leave us with unexpected modref
-  // results for I, so add a check to not model instructions that may not read
-  // from or write to memory. This is necessary for correctness.
-  if (!I->mayReadFromMemory() && !I->mayWriteToMemory())
-    return nullptr;
-
   bool Def, Use;
   if (Template) {
     Def = isa<MemoryDef>(Template);
@@ -1757,6 +1751,12 @@ MemoryUseOrDef *MemorySSA::createNewAccess(Instruction *I,
     }
 #endif
   } else {
+    // Using a nonstandard AA pipeline might leave us with unexpected modref
+    // results for I, so add a check to not model instructions that may not read
+    // from or write to memory. This is necessary for correctness.
+    if (!I->mayReadFromMemory() && !I->mayWriteToMemory())
+      return nullptr;
+
     // Find out what affect this instruction has on memory.
     ModRefInfo ModRef = AAP->getModRefInfo(I, std::nullopt);
     // The isOrdered check is used to ensure that volatiles end up as defs
diff --git a/llvm/test/Transforms/SimpleLoopUnswitch/memssa-readnone-access.ll b/llvm/test/Transforms/SimpleLoopUnswitch/memssa-readnone-access.ll
new file mode 100644
index 00000000000000..2aaf777683e116
--- /dev/null
+++ b/llvm/test/Transforms/SimpleLoopUnswitch/memssa-readnone-access.ll
@@ -0,0 +1,117 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 4
+; RUN: opt -S -passes="loop-mssa(loop-instsimplify,simple-loop-unswitch<nontrivial>)" < %s | FileCheck %s
+
+ at vtable = constant ptr @foo
+
+declare void @foo() memory(none)
+declare void @bar()
+
+; The call becomes known to be readnone after simplification, but still has a
+; MemoryAccess. Make sure this does not lead to an assertion failure.
+define void @test(i1 %c) {
+; CHECK-LABEL: define void @test(
+; CHECK-SAME: i1 [[C:%.*]]) {
+; CHECK-NEXT:    [[C_FR:%.*]] = freeze i1 [[C]]
+; CHECK-NEXT:    br i1 [[C_FR]], label [[DOTSPLIT_US:%.*]], label [[DOTSPLIT:%.*]]
+; CHECK:       .split.us:
+; CHECK-NEXT:    br label [[LOOP_US:%.*]]
+; CHECK:       loop.us:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    br label [[EXIT_SPLIT_US:%.*]]
+; CHECK:       exit.split.us:
+; CHECK-NEXT:    br label [[EXIT:%.*]]
+; CHECK:       .split:
+; CHECK-NEXT:    br label [[LOOP:%.*]]
+; CHECK:       loop:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    br label [[LOOP]]
+; CHECK:       exit:
+; CHECK-NEXT:    ret void
+;
+  br label %loop
+
+loop:
+  %fn = load ptr, ptr @vtable, align 8
+  call void %fn()
+  br i1 %c, label %exit, label %loop
+
+exit:
+  ret void
+}
+
+; Variant with another access after the call.
+define void @test2(i1 %c, ptr %p) {
+; CHECK-LABEL: define void @test2(
+; CHECK-SAME: i1 [[C:%.*]], ptr [[P:%.*]]) {
+; CHECK-NEXT:    [[C_FR:%.*]] = freeze i1 [[C]]
+; CHECK-NEXT:    br i1 [[C_FR]], label [[DOTSPLIT_US:%.*]], label [[DOTSPLIT:%.*]]
+; CHECK:       .split.us:
+; CHECK-NEXT:    br label [[LOOP_US:%.*]]
+; CHECK:       loop.us:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    call void @bar()
+; CHECK-NEXT:    br label [[EXIT_SPLIT_US:%.*]]
+; CHECK:       exit.split.us:
+; CHECK-NEXT:    br label [[EXIT:%.*]]
+; CHECK:       .split:
+; CHECK-NEXT:    br label [[LOOP:%.*]]
+; CHECK:       loop:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    call void @bar()
+; CHECK-NEXT:    br label [[LOOP]]
+; CHECK:       exit:
+; CHECK-NEXT:    ret void
+;
+  br label %loop
+
+loop:
+  %fn = load ptr, ptr @vtable, align 8
+  call void %fn()
+  call void @bar()
+  br i1 %c, label %exit, label %loop
+
+exit:
+  ret void
+}
+
+; Variant with another access after the call and no access before the call.
+define void @test3(i1 %c, ptr %p) {
+; CHECK-LABEL: define void @test3(
+; CHECK-SAME: i1 [[C:%.*]], ptr [[P:%.*]]) {
+; CHECK-NEXT:    [[C_FR:%.*]] = freeze i1 [[C]]
+; CHECK-NEXT:    br i1 [[C_FR]], label [[DOTSPLIT_US:%.*]], label [[DOTSPLIT:%.*]]
+; CHECK:       .split.us:
+; CHECK-NEXT:    br label [[LOOP_US:%.*]]
+; CHECK:       loop.us:
+; CHECK-NEXT:    br label [[SPLIT_US:%.*]]
+; CHECK:       split.us:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    call void @bar()
+; CHECK-NEXT:    br label [[EXIT_SPLIT_US:%.*]]
+; CHECK:       exit.split.us:
+; CHECK-NEXT:    br label [[EXIT:%.*]]
+; CHECK:       .split:
+; CHECK-NEXT:    br label [[LOOP:%.*]]
+; CHECK:       loop:
+; CHECK-NEXT:    br label [[SPLIT:%.*]]
+; CHECK:       split:
+; CHECK-NEXT:    call void @foo()
+; CHECK-NEXT:    call void @bar()
+; CHECK-NEXT:    br label [[LOOP]]
+; CHECK:       exit:
+; CHECK-NEXT:    ret void
+;
+  br label %loop
+
+loop:
+  %fn = load ptr, ptr @vtable, align 8
+  br label %split
+
+split:
+  call void %fn()
+  call void @bar()
+  br i1 %c, label %exit, label %loop
+
+exit:
+  ret void
+}


