[PATCH] D44497: [EarlyCSE] Reuse invariant scopes for invariant load

Philip Reames via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 14 15:02:02 PDT 2018


reames created this revision.
reames added reviewers: anna, hfinkel.
Herald added subscribers: bollu, mcrosier.

This is a follow-up to https://reviews.llvm.org/D43716 which rewrites the invariant load handling using the new infrastructure.  It's slightly more powerful, though only in minor ways for the moment.  It's not clear that DSE of stores to invariant locations is actually interesting in practice, since it's unclear why your IR would contain such a construct to start with.
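For reference, the shape of IR this enables DSE on mirrors the test_dse1 case added below (@clobber_and_use here is a stand-in external function that may read and write other memory):

```llvm
declare void @clobber_and_use(i32)

define void @example(i32* %p) {
  ; !invariant.load asserts the location is unchanging once dereferenceable.
  %v1 = load i32, i32* %p, !invariant.load !{}
  call void @clobber_and_use(i32 %v1)
  ; Storing the same unchanging value back is a no-op, so with this patch
  ; EarlyCSE can delete the store.
  store i32 %v1, i32* %p
  ret void
}
```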


Repository:
  rL LLVM

https://reviews.llvm.org/D44497

Files:
  lib/Transforms/Scalar/EarlyCSE.cpp
  test/Transforms/EarlyCSE/invariant-loads.ll


Index: test/Transforms/EarlyCSE/invariant-loads.ll
===================================================================
--- test/Transforms/EarlyCSE/invariant-loads.ll
+++ test/Transforms/EarlyCSE/invariant-loads.ll
@@ -95,11 +95,25 @@
   ret void
 }
 
-define void @test_false_negative_dse(i32* %p, i1 %cnd) {
-; CHECK-LABEL: @test_false_negative_dse
-; CHECK: store
+; By assumption, the call can't change the contents of p.
+; LangRef is a bit unclear about whether the store is reachable, so
+; for the moment we choose to be conservative and just assume it's valid
+; to restore the same unchanging value.
+define void @test_dse1(i32* %p) {
+; CHECK-LABEL: @test_dse1
+; CHECK-NOT: store
   %v1 = load i32, i32* %p, !invariant.load !{}
   call void @clobber_and_use(i32 %v1)
   store i32 %v1, i32* %p
   ret void
 }
+
+; By assumption, v1 must equal v2 (TODO)
+define void @test_false_negative_dse2(i32* %p, i32 %v2) {
+; CHECK-LABEL: @test_false_negative_dse2
+; CHECK-NOT: store
+  %v1 = load i32, i32* %p, !invariant.load !{}
+  call void @clobber_and_use(i32 %v1)
+  store i32 %v2, i32* %p
+  ret void
+}
Index: lib/Transforms/Scalar/EarlyCSE.cpp
===================================================================
--- lib/Transforms/Scalar/EarlyCSE.cpp
+++ lib/Transforms/Scalar/EarlyCSE.cpp
@@ -357,15 +357,11 @@
     int MatchingId = -1;
     bool IsAtomic = false;
 
-    // TODO: Remove this flag.  It would be strictly stronger to add a record
-    // to the AvailableInvariant table when passing the invariant load instead.
-    bool IsInvariant = false;
-
     LoadValue() = default;
     LoadValue(Instruction *Inst, unsigned Generation, unsigned MatchingId,
-              bool IsAtomic, bool IsInvariant)
+              bool IsAtomic)
         : DefInst(Inst), Generation(Generation), MatchingId(MatchingId),
-          IsAtomic(IsAtomic), IsInvariant(IsInvariant) {}
+          IsAtomic(IsAtomic) {}
   };
 
   using LoadMapAllocator =
@@ -903,8 +899,7 @@
           !MemInst.isVolatile() && MemInst.isUnordered() &&
           // We can't replace an atomic load with one which isn't also atomic.
           InVal.IsAtomic >= MemInst.isAtomic() &&
-          (InVal.IsInvariant ||
-           isOperatingOnInvariantMemAt(Inst, InVal.Generation) ||
+          (isOperatingOnInvariantMemAt(Inst, InVal.Generation) ||
            isSameMemGeneration(InVal.Generation, CurrentGeneration,
                                InVal.DefInst, Inst))) {
         Value *Op = getOrCreateResult(InVal.DefInst, Inst->getType());
@@ -925,7 +920,14 @@
       AvailableLoads.insert(
           MemInst.getPointerOperand(),
           LoadValue(Inst, CurrentGeneration, MemInst.getMatchingId(),
-                    MemInst.isAtomic(), MemInst.isInvariantLoad()));
+                    MemInst.isAtomic()));
+      if (MemInst.isInvariantLoad()) {
+        // If we pass an invariant load, we know that memory location is
+        // indefinitely constant from the moment of first dereferenceability.
+        // We conservatively treat the invariant_load as that moment.
+        auto MemLoc = MemoryLocation::get(Inst);
+        AvailableInvariants.insert(MemLoc, CurrentGeneration);
+      }
       LastStore = nullptr;
       continue;
     }
@@ -1050,7 +1052,7 @@
         AvailableLoads.insert(
             MemInst.getPointerOperand(),
             LoadValue(Inst, CurrentGeneration, MemInst.getMatchingId(),
-                      MemInst.isAtomic(), /*IsInvariant=*/false));
+                      MemInst.isAtomic()));
 
         // Remember that this was the last unordered store we saw for DSE. We
         // don't yet handle DSE on ordered or volatile stores since we don't



