[llvm] r227112 - Refine memory dependence's notion of volatile semantics

Philip Reames listmail at philipreames.com
Mon Jan 26 10:54:27 PST 2015


Author: reames
Date: Mon Jan 26 12:54:27 2015
New Revision: 227112

URL: http://llvm.org/viewvc/llvm-project?rev=227112&view=rev
Log:
Refine memory dependence's notion of volatile semantics

According to my reading of the LangRef, volatile operations are ordered only with respect to other volatile operations. It is therefore entirely legal, and profitable, to forward unrelated loads over a volatile load. This patch implements that for GVN by refining the transition rules MemoryDependenceAnalysis uses when it encounters a volatile access.
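
As a minimal sketch of the payoff (a hand-written illustration in the style of @test1 in the test file added below, not an additional committed test), the refined rules let GVN forward %x to %y across the unrelated volatile load, folding the function down to "ret i32 0":

    define i32 @example(i32* %p, i32* %q) {
    entry:
      %x = load i32* %p
      load volatile i32* %q   ; unrelated volatile access: must stay, but no longer blocks forwarding
      %y = load i32* %p       ; value-forwarded from %x
      %sub = sub i32 %y, %x   ; folds to 0
      ret i32 %sub
    }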

The added test cases show where the extra flexibility is profitable for local dependence optimizations. I have a related change (227110) which will extend this to non-local dependence (i.e. PRE), but that's essentially orthogonal to the semantic change in this patch. I have tested the two together and can confirm that PRE works over a volatile load with both changes. I will be submitting a PRE w/volatiles test case separately in the near future.

Differential Revision: http://reviews.llvm.org/D6901


Added:
    llvm/trunk/test/Transforms/GVN/volatile.ll
Modified:
    llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp

Modified: llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp?rev=227112&r1=227111&r2=227112&view=diff
==============================================================================
--- llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp (original)
+++ llvm/trunk/lib/Analysis/MemoryDependenceAnalysis.cpp Mon Jan 26 12:54:27 2015
@@ -362,6 +362,17 @@ getLoadLoadClobberFullWidthSize(const Va
   }
 }
 
+static bool isVolatile(Instruction *Inst) {
+  if (LoadInst *LI = dyn_cast<LoadInst>(Inst))
+    return LI->isVolatile();
+  else if (StoreInst *SI = dyn_cast<StoreInst>(Inst))
+    return SI->isVolatile();
+  else if (AtomicCmpXchgInst *AI = dyn_cast<AtomicCmpXchgInst>(Inst))
+    return AI->isVolatile();
+  return false;
+}
+
+
 /// getPointerDependencyFrom - Return the instruction on which a memory
 /// location depends.  If isLoad is true, this routine ignores may-aliases with
 /// read-only operations.  If isLoad is false, this routine ignores may-aliases
@@ -448,12 +459,26 @@ getPointerDependencyFrom(const AliasAnal
     // does not alias with when this atomic load indicates that another thread may
     // be accessing the location.
     if (LoadInst *LI = dyn_cast<LoadInst>(Inst)) {
+
+      // While volatile accesses cannot be eliminated, they do not have to
+      // clobber non-aliasing locations; normal accesses, for example, can be
+      // safely reordered with volatile accesses.
+      if (LI->isVolatile()) {
+        if (!QueryInst)
+          // Original QueryInst *may* be volatile
+          return MemDepResult::getClobber(LI);
+        if (isVolatile(QueryInst))
+          // Ordering required if QueryInst is itself volatile
+          return MemDepResult::getClobber(LI);
+        // Otherwise, volatile doesn't imply any special ordering
+      }
+      
       // Atomic loads have complications involved.
       // A Monotonic (or higher) load is OK if the query inst is itself not atomic.
       // An Acquire (or higher) load sets the HasSeenAcquire flag, so that any
       //   release store will know to return getClobber.
       // FIXME: This is overly conservative.
-      if (!LI->isUnordered()) {
+      if (LI->isAtomic() && LI->getOrdering() > Unordered) {
         if (!QueryInst)
           return MemDepResult::getClobber(LI);
         if (auto *QueryLI = dyn_cast<LoadInst>(QueryInst)) {
@@ -470,13 +495,6 @@ getPointerDependencyFrom(const AliasAnal
           HasSeenAcquire = true;
       }
 
-      // FIXME: this is overly conservative.
-      // While volatile access cannot be eliminated, they do not have to clobber
-      // non-aliasing locations, as normal accesses can for example be reordered
-      // with volatile accesses.
-      if (LI->isVolatile())
-        return MemDepResult::getClobber(LI);
-
       AliasAnalysis::Location LoadLoc = AA->getLocation(LI);
 
       // If we found a pointer, check if it could be the same as our pointer.
@@ -890,14 +908,7 @@ getNonLocalPointerDependency(Instruction
   // Doing so would require piping through the QueryInst all the way through.
   // TODO: volatiles can't be elided, but they can be reordered with other
   // non-volatile accesses.
-  auto isVolatile = [](Instruction *Inst) {
-    if (LoadInst *LI = dyn_cast<LoadInst>(Inst)) {
-      return LI->isVolatile();
-    } else if (StoreInst *SI = dyn_cast<StoreInst>(Inst)) {
-      return SI->isVolatile();
-    }
-    return false;
-  };
+
   // We currently give up on any instruction which is ordered, but we do handle
   // atomic instructions which are unordered.
   // TODO: Handle ordered instructions
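
A note on the rewritten check in the first hunk: LoadInst::isUnordered() folds volatility into its result, so the old "!LI->isUnordered()" condition also fired for plain volatile loads and forced a clobber. Roughly (paraphrased from Instructions.h of this era, not a verbatim quote):

    bool LoadInst::isUnordered() const {
      // True only for loads that are neither ordered-atomic nor volatile.
      return getOrdering() <= Unordered && !isVolatile();
    }

Spelling the condition as "LI->isAtomic() && LI->getOrdering() > Unordered" restricts that block to atomic-ordering concerns and leaves volatility to the new early check, which clobbers only when the query instruction is itself volatile.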

Added: llvm/trunk/test/Transforms/GVN/volatile.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Transforms/GVN/volatile.ll?rev=227112&view=auto
==============================================================================
--- llvm/trunk/test/Transforms/GVN/volatile.ll (added)
+++ llvm/trunk/test/Transforms/GVN/volatile.ll Mon Jan 26 12:54:27 2015
@@ -0,0 +1,75 @@
+; Tests that check our handling of volatile instructions encountered
+; when scanning for dependencies
+; RUN: opt -basicaa -gvn -S < %s | FileCheck %s
+
+; Check that we can bypass a volatile load when searching
+; for dependencies of a non-volatile load
+define i32 @test1(i32* nocapture %p, i32* nocapture %q) {
+; CHECK-LABEL: test1
+; CHECK:      %0 = load volatile i32* %q
+; CHECK-NEXT: ret i32 0
+entry:
+  %x = load i32* %p
+  load volatile i32* %q
+  %y = load i32* %p
+  %add = sub i32 %y, %x
+  ret i32 %add
+}
+
+; We cannot value-forward if the query instruction is
+; volatile; that would (in effect) remove the volatile load
+define i32 @test2(i32* nocapture %p, i32* nocapture %q) {
+; CHECK-LABEL: test2
+; CHECK:      %x = load i32* %p
+; CHECK-NEXT: %y = load volatile i32* %p
+; CHECK-NEXT: %add = sub i32 %y, %x
+entry:
+  %x = load i32* %p
+  %y = load volatile i32* %p
+  %add = sub i32 %y, %x
+  ret i32 %add
+}
+
+; If the query instruction is itself volatile, we *cannot*
+; reorder it even if p and q are noalias
+define i32 @test3(i32* noalias nocapture %p, i32* noalias nocapture %q) {
+; CHECK-LABEL: test3
+; CHECK:      %x = load i32* %p
+; CHECK-NEXT: %0 = load volatile i32* %q
+; CHECK-NEXT: %y = load volatile i32* %p
+entry:
+  %x = load i32* %p
+  load volatile i32* %q
+  %y = load volatile i32* %p
+  %add = sub i32 %y, %x
+  ret i32 %add
+}
+
+; If an encountered instruction is both volatile and ordered,
+; we must respect the stricter of the two constraints.  In this
+; case, the ordering prevents forwarding.
+define i32 @test4(i32* noalias nocapture %p, i32* noalias nocapture %q) {
+; CHECK-LABEL: test4
+; CHECK:      %x = load i32* %p
+; CHECK-NEXT: %0 = load atomic volatile i32* %q seq_cst 
+; CHECK-NEXT: %y = load atomic i32* %p seq_cst
+entry:
+  %x = load i32* %p
+  load atomic volatile i32* %q seq_cst, align 4
+  %y = load atomic i32* %p seq_cst, align 4
+  %add = sub i32 %y, %x
+  ret i32 %add
+}
+
+; Value forwarding from a volatile load is perfectly legal
+define i32 @test5(i32* nocapture %p, i32* nocapture %q) {
+; CHECK-LABEL: test5
+; CHECK:      %x = load volatile i32* %p
+; CHECK-NEXT: ret i32 0
+entry:
+  %x = load volatile i32* %p
+  %y = load i32* %p
+  %add = sub i32 %y, %x
+  ret i32 %add
+}
+
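
For reference, the new file checks itself via the RUN line above; it can be exercised locally with something like the following (command-line sketch, assuming an in-tree build of opt and FileCheck):

    opt -basicaa -gvn -S < llvm/test/Transforms/GVN/volatile.ll \
      | FileCheck llvm/test/Transforms/GVN/volatile.ll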




