[llvm] r267278 - BitcodeWriter: Emit uniqued subgraphs after all distinct nodes

Duncan P. N. Exon Smith via llvm-commits llvm-commits at lists.llvm.org
Fri Apr 22 21:59:22 PDT 2016


Author: dexonsmith
Date: Fri Apr 22 23:59:22 2016
New Revision: 267278

URL: http://llvm.org/viewvc/llvm-project?rev=267278&view=rev
Log:
BitcodeWriter: Emit uniqued subgraphs after all distinct nodes

Since forward references for uniqued node operands are expensive (and
those for distinct node operands are cheap due to
DistinctMDOperandPlaceholder), minimize forward references in uniqued
node operands.
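
To make the asymmetry concrete, here is a small sketch (not part of this
commit) of the two operand kinds.  A forward reference hit while reading a
uniqued node's operands forces the reader to re-unique the node later,
whereas a forward reference in a distinct node's operands can be resolved
in place through DistinctMDOperandPlaceholder:

  ; Sketch only.  If !0 were read before !1, the reader would need an
  ; expensive forward reference, since !0 is uniqued.
  !0 = !{!1}
  ; If !2 were read before !1, the forward reference would be cheap,
  ; since !2 is distinct.
  !2 = distinct !{!1}
  !1 = !{}
  !named = !{!0, !2}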

Moreover, guarantee that when a cycle is broken by a distinct node, none
of the uniqued nodes have any forward references.  In
ValueEnumerator::EnumerateMetadata, enumerate uniqued node subgraphs
first, delaying distinct nodes until all uniqued nodes have been
handled.  This guarantees that uniqued nodes only have forward
references when there is a uniquing cycle (since r267276 changed
ValueEnumerator::organizeMetadata to partition distinct nodes in front
of uniqued nodes as a post-pass).
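
As a small worked example (sketch only, in the spirit of the new tests
below), take a uniqued chain with a distinct leaf:

  !0 = distinct !{}
  !1 = !{!0}
  !2 = !{!1}
  !named = !{!2}

Starting from !2, the distinct leaf !0 is delayed, so enumeration visits
!1, then !2, and only then flushes !0.  Once organizeMetadata partitions
distinct nodes in front (r267276), !0 lands ahead of both uniqued nodes
and neither !1 nor !2 needs a forward reference for an operand.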

Note that a single uniqued subgraph can hit multiple distinct nodes at
its leaves.  Ideally these would themselves be emitted in post-order,
but this commit doesn't attempt that; I think it requires an extra pass
through the edges, which I'm not convinced is worth it (since
DistinctMDOperandPlaceholder makes forward references quite cheap
between distinct nodes).
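
For illustration (sketch only), a single uniqued node can pick up several
distinct leaves at once:

  !0 = distinct !{!1}
  !1 = distinct !{}
  !2 = !{!0, !1}
  !named = !{!2}

Both !0 and !1 get delayed while the uniqued subgraph rooted at !2 is
enumerated and are flushed together afterwards; if one such distinct node
ends up forward-referencing another, the reference stays cheap thanks to
DistinctMDOperandPlaceholder.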

I've added two testcases:

  - test/Bitcode/mdnodes-distinct-in-post-order.ll is just like
    test/Bitcode/mdnodes-in-post-order.ll, except with distinct nodes
    instead of uniqued ones.  This confirms that, in the absence of
    uniqued nodes, distinct nodes are still emitted in post-order.

  - test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll is the minimal
    example where a naive post-order traversal would cause one uniqued
    node to forward-reference another.  IOW, it's the motivating test.

Added:
    llvm/trunk/test/Bitcode/mdnodes-distinct-in-post-order.ll
    llvm/trunk/test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll
Modified:
    llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.cpp
    llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.h

Modified: llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.cpp?rev=267278&r1=267277&r2=267278&view=diff
==============================================================================
--- llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.cpp (original)
+++ llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.cpp Fri Apr 22 23:59:22 2016
@@ -567,6 +567,12 @@ void ValueEnumerator::dropFunctionFromMe
 }
 
 void ValueEnumerator::EnumerateMetadata(unsigned F, const Metadata *MD) {
+  // It's vital for reader efficiency that uniqued subgraphs are done in
+  // post-order; it's expensive when their operands have forward references.
+  // If a distinct node is referenced from a uniqued node, it'll be delayed
+  // until the uniqued subgraph has been completely traversed.
+  SmallVector<const MDNode *, 32> DelayedDistinctNodes;
+
   // Start by enumerating MD, and then work through its transitive operands in
   // post-order.  This requires a depth-first search.
   SmallVector<std::pair<const MDNode *, MDNode::op_iterator>, 32> Worklist;
@@ -584,7 +590,12 @@ void ValueEnumerator::EnumerateMetadata(
     if (I != N->op_end()) {
       auto *Op = cast<MDNode>(*I);
       Worklist.back().second = ++I;
-      Worklist.push_back(std::make_pair(Op, Op->op_begin()));
+
+      // Delay traversing Op if it's a distinct node and N is uniqued.
+      if (Op->isDistinct() && !N->isDistinct())
+        DelayedDistinctNodes.push_back(Op);
+      else
+        Worklist.push_back(std::make_pair(Op, Op->op_begin()));
       continue;
     }
 
@@ -592,6 +603,14 @@ void ValueEnumerator::EnumerateMetadata(
     Worklist.pop_back();
     MDs.push_back(N);
     MetadataMap[N].ID = MDs.size();
+
+    // Flush out any delayed distinct nodes; these are all the distinct nodes
+    // that are leaves in the last uniqued subgraph.
+    if (Worklist.empty() || Worklist.back().first->isDistinct()) {
+      for (const MDNode *N : DelayedDistinctNodes)
+        Worklist.push_back(std::make_pair(N, N->op_begin()));
+      DelayedDistinctNodes.clear();
+    }
   }
 }
 

Modified: llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.h
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.h?rev=267278&r1=267277&r2=267278&view=diff
==============================================================================
--- llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.h (original)
+++ llvm/trunk/lib/Bitcode/Writer/ValueEnumerator.h Fri Apr 22 23:59:22 2016
@@ -256,8 +256,26 @@ private:
   const MDNode *enumerateMetadataImpl(unsigned F, const Metadata *MD);
 
   unsigned getMetadataFunctionID(const Function *F) const;
+
+  /// Enumerate reachable metadata in (almost) post-order.
+  ///
+  /// Enumerate all the metadata reachable from MD.  We want to minimize the
+  /// cost of reading bitcode records, and so the primary consideration is that
+  /// operands of uniqued nodes are resolved before the nodes are read.  This
+  /// avoids re-uniquing them on the context and factors away RAUW support.
+  ///
+  /// This algorithm guarantees that subgraphs of uniqued nodes are in
+  /// post-order.  Distinct subgraphs reachable only from a single uniqued node
+  /// will be in post-order.
+  ///
+  /// \note The relative order of a distinct and uniqued node is irrelevant.
+  /// \a organizeMetadata() will later partition distinct nodes ahead of
+  /// uniqued ones.
+  ///{
   void EnumerateMetadata(const Function *F, const Metadata *MD);
   void EnumerateMetadata(unsigned F, const Metadata *MD);
+  ///}
+
   void EnumerateFunctionLocalMetadata(const Function &F,
                                       const LocalAsMetadata *Local);
   void EnumerateFunctionLocalMetadata(unsigned F, const LocalAsMetadata *Local);

Added: llvm/trunk/test/Bitcode/mdnodes-distinct-in-post-order.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/mdnodes-distinct-in-post-order.ll?rev=267278&view=auto
==============================================================================
--- llvm/trunk/test/Bitcode/mdnodes-distinct-in-post-order.ll (added)
+++ llvm/trunk/test/Bitcode/mdnodes-distinct-in-post-order.ll Fri Apr 22 23:59:22 2016
@@ -0,0 +1,24 @@
+; RUN: llvm-as <%s | llvm-bcanalyzer -dump | FileCheck %s
+; Check that distinct nodes are emitted in post-order to avoid unnecessary
+; forward references.
+
+; Nodes in this testcase are numbered to match how they are referenced in
+; bitcode.  !3 is referenced as opN=3.
+
+; The leaves should come first (in either order).
+; CHECK:       <DISTINCT_NODE/>
+; CHECK-NEXT:  <DISTINCT_NODE/>
+!1 = distinct !{}
+!2 = distinct !{}
+
+; CHECK-NEXT:  <DISTINCT_NODE op0=1 op1=2/>
+!3 = distinct !{!1, !2}
+
+; CHECK-NEXT:  <DISTINCT_NODE op0=1 op1=3 op2=2/>
+!4 = distinct !{!1, !3, !2}
+
+; Note: named metadata nodes cannot reference null, so their operands are
+; numbered off-by-one.
+; CHECK-NEXT:  <NAME
+; CHECK-NEXT:  <NAMED_NODE op0=3/>
+!named = !{!4}

Added: llvm/trunk/test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll
URL: http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll?rev=267278&view=auto
==============================================================================
--- llvm/trunk/test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll (added)
+++ llvm/trunk/test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll Fri Apr 22 23:59:22 2016
@@ -0,0 +1,29 @@
+; RUN: llvm-as <%s | llvm-bcanalyzer -dump | FileCheck %s
+; Check that distinct nodes break uniquing cycles, so that uniqued subgraphs
+; are always in post-order.
+;
+; It may not be immediately obvious why this is an interesting graph.  There
+; are three nodes in a cycle, and one of them (!1) is distinct.  Because the
+; entry point is !2, a naive post-order traversal would give !3, !1, !2; but
+; this means when !3 is parsed the reader will need a forward reference for !2.
+; Forward references for uniqued node operands are expensive, whereas they're
+; cheap for distinct node operands.  If the distinct node is emitted first, the
+; uniqued nodes don't need any forward references at all.
+
+; Nodes in this testcase are numbered to match how they are referenced in
+; bitcode.  !3 is referenced as opN=3.
+
+; CHECK:       <DISTINCT_NODE op0=3/>
+!1 = distinct !{!3}
+
+; CHECK-NEXT:  <NODE op0=1/>
+!2 = !{!1}
+
+; CHECK-NEXT:  <NODE op0=2/>
+!3 = !{!2}
+
+; Note: named metadata nodes cannot reference null, so their operands are
+; numbered off-by-one.
+; CHECK-NEXT:  <NAME
+; CHECK-NEXT:  <NAMED_NODE op0=1/>
+!named = !{!2}
