[llvm] r310450 - [LCG] Switch one of the update methods for the LazyCallGraph to support
Yung, Douglas via llvm-commits
llvm-commits at lists.llvm.org
Wed Aug 9 17:51:11 PDT 2017
Hi Chandler,
I don't know if you are aware, but this commit seems to have caused the following 11 tests to start failing on one of the bots (http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/4201):
LLVM-Unit :: Analysis/Checking/./AnalysisTests.exe/LazyCallGraphTest.InlineAndDeleteFunction
LLVM-Unit :: Analysis/Checking/./AnalysisTests.exe/LazyCallGraphTest.InternalEdgeRemoval
LLVM-Unit :: Analysis/Checking/./AnalysisTests.exe/LazyCallGraphTest.InternalMultiEdgeRemoval
LLVM :: Other/cgscc-devirt-iteration.ll
LLVM :: Other/cgscc-iterate-function-mutation.ll
LLVM :: Transforms/Inline/cgscc-incremental-invalidate.ll
LLVM :: Transforms/Inline/cgscc-invalidate.ll
LLVM :: Transforms/Inline/cgscc-update.ll
LLVM :: Transforms/Inline/frameescape.ll
LLVM :: Transforms/Inline/internal-scc-members.ll
LLVM :: Transforms/Inline/monster_scc.ll
In each case, the test fails with an assertion failure:
Assertion failed: G && "Can't have a null graph!", file C:\ps4-buildslave2\llvm-clang-x86_64-expensive-checks-win\llvm\lib\Analysis\LazyCallGraph.cpp, line 276
Can you take a look?
Douglas Yung
> -----Original Message-----
> From: llvm-commits [mailto:llvm-commits-bounces at lists.llvm.org] On Behalf Of
> Chandler Carruth via llvm-commits
> Sent: Wednesday, August 09, 2017 2:05
> To: llvm-commits at lists.llvm.org
> Subject: [llvm] r310450 - [LCG] Switch one of the update methods for the
> LazyCallGraph to support
>
> Author: chandlerc
> Date: Wed Aug 9 02:05:27 2017
> New Revision: 310450
>
> URL: http://llvm.org/viewvc/llvm-project?rev=310450&view=rev
> Log:
> [LCG] Switch one of the update methods for the LazyCallGraph to support
> limited batch updates.
>
> Specifically, allow removing multiple reference edges starting from a common
> source node. There are a few constraints that play into supporting this form
> of batching:
>
> 1) Given the way updates occur during the CGSCC walk, the most we can
> functionally batch together are removals with a common source node. This
> also makes the batching simpler to implement, so it seems a worthwhile
> restriction.
> 2) By far the hottest function in the large C++ files I measured
> (generated code for protocol buffers) spent a huge amount of time
> removing ref edges specifically, so it seems worth focusing there.
> 3) The algorithm for removing ref edges is very amenable to this
> restricted batching. Only the API and implementation special casing for
> the non-batch case gets in the way. Once that is removed, supporting
> batches is nearly trivial.
>
> This does modify the API in an interesting way -- now, we only preserve the
> target RefSCC when the RefSCC structure is unchanged. In the face of any
> splits, we create brand new RefSCC objects. However, all of the users I could
> find were OK with this. Only the unittest needed interesting updates here.
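>
> To make the new contract concrete, a caller of the batched API might look
> roughly like the following sketch (the node names and the process() helper
> are illustrative only, not part of this patch):
>
>   // Sketch (not from the patch): handle both outcomes of a batched removal.
>   SmallVector<LazyCallGraph::Node *, 4> DeadTargets = {&TargetN1, &TargetN2};
>   SmallVector<LazyCallGraph::RefSCC *, 1> NewRCs =
>       RC.removeInternalRefEdge(SourceN, DeadTargets);
>   if (NewRCs.empty()) {
>     // No split: RC is unchanged and still valid in the graph.
>   } else {
>     // RC is now dead; the returned RefSCCs replace it and are in postorder.
>     for (LazyCallGraph::RefSCC *NewRC : NewRCs)
>       process(*NewRC);
>   }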
>
> How much does batching these updates help? I instrumented the compiler when
> run over a very large generated source file for a protocol buffer and found
> that the majority of updates intrinsically involve only one function at a
> time. However, nearly 40% of the total ref edges removed are removed as part
> of a batch larger than one, so these are the cases batching can help with.
>
> When compiling the IR for this file with 'opt' and 'O3', this patch reduces
> the total time by 8-9%.
>
> Differential Revision: https://reviews.llvm.org/D36352
>
> Modified:
> llvm/trunk/include/llvm/Analysis/LazyCallGraph.h
> llvm/trunk/lib/Analysis/CGSCCPassManager.cpp
> llvm/trunk/lib/Analysis/LazyCallGraph.cpp
> llvm/trunk/unittests/Analysis/LazyCallGraphTest.cpp
>
> Modified: llvm/trunk/include/llvm/Analysis/LazyCallGraph.h
> URL: http://llvm.org/viewvc/llvm-
> project/llvm/trunk/include/llvm/Analysis/LazyCallGraph.h?rev=310450&r1=310449&
> r2=310450&view=diff
> ==============================================================================
> --- llvm/trunk/include/llvm/Analysis/LazyCallGraph.h (original)
> +++ llvm/trunk/include/llvm/Analysis/LazyCallGraph.h Wed Aug 9 02:05:27 2017
> @@ -795,26 +795,25 @@ public:
> /// though, so be careful calling this while iterating over them.
> void removeOutgoingEdge(Node &SourceN, Node &TargetN);
>
> - /// Remove a ref edge which is entirely within this RefSCC.
> + /// Remove a list of ref edges which are entirely within this RefSCC.
> ///
> - /// Both the \a SourceN and the \a TargetN must be within this RefSCC.
> - /// Removing such an edge may break cycles that form this RefSCC and thus
> - /// this operation may change the RefSCC graph significantly. In
> + /// Both the \a SourceN and all of the \a TargetNs must be within this
> + /// RefSCC. Removing these edges may break cycles that form this RefSCC and
> + /// thus this operation may change the RefSCC graph significantly. In
> /// particular, this operation will re-form new RefSCCs based on the
> /// remaining connectivity of the graph. The following invariants are
> /// guaranteed to hold after calling this method:
> ///
> - /// 1) This RefSCC is still a RefSCC in the graph.
> - /// 2) This RefSCC will be the parent of any new RefSCCs. Thus, this RefSCC
> - /// is preserved as the root of any new RefSCC DAG formed.
> - /// 3) No RefSCC other than this RefSCC has its member set changed (this is
> + /// 1) If a ref-cycle remains after removal, it leaves this RefSCC intact
> + /// and in the graph. No new RefSCCs are built.
> + /// 2) Otherwise, this RefSCC will be dead after this call and no longer in
> + /// the graph or the postorder traversal of the call graph. Any iterator
> + /// pointing at this RefSCC will become invalid.
> + /// 3) All newly formed RefSCCs will be returned and the order of the
> + /// RefSCCs returned will be a valid postorder traversal of the new
> + /// RefSCCs.
> + /// 4) No RefSCC other than this RefSCC has its member set changed (this is
> /// inherent in the definition of removing such an edge).
> - /// 4) All of the parent links of the RefSCC graph will be updated to
> - /// reflect the new RefSCC structure.
> - /// 5) All RefSCCs formed out of this RefSCC, excluding this RefSCC, will
> - /// be returned in post-order.
> - /// 6) The order of the RefSCCs in the vector will be a valid postorder
> - /// traversal of the new RefSCCs.
> ///
> /// These invariants are very important to ensure that we can build
> /// optimization pipelines on top of the CGSCC pass manager which
> @@ -833,11 +832,9 @@ public:
> /// within this RefSCC and edges from this RefSCC to child RefSCCs. Some
> /// effort has been made to minimize the overhead of common cases such as
> /// self-edges and edge removals which result in a spanning tree with no
> - /// more cycles. There are also detailed comments within the implementation
> - /// on techniques which could substantially improve this routine's
> - /// efficiency.
> + /// more cycles.
> SmallVector<RefSCC *, 1> removeInternalRefEdge(Node &SourceN,
> -                                                Node &TargetN);
> +                                                ArrayRef<Node *> TargetNs);
>
> /// A convenience wrapper around the above to handle trivial cases of
> /// inserting a new call edge.
>
> Modified: llvm/trunk/lib/Analysis/CGSCCPassManager.cpp
> URL: http://llvm.org/viewvc/llvm-
> project/llvm/trunk/lib/Analysis/CGSCCPassManager.cpp?rev=310450&r1=310449&r2=3
> 10450&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Analysis/CGSCCPassManager.cpp (original)
> +++ llvm/trunk/lib/Analysis/CGSCCPassManager.cpp Wed Aug 9 02:05:27 2017
> @@ -459,71 +459,78 @@ LazyCallGraph::SCC &llvm::updateCGAndAna
> VisitRef(*F);
>
> // First remove all of the edges that are no longer present in this function.
> - // We have to build a list of dead targets first and then remove them as the
> - // data structures will all be invalidated by removing them.
> - SmallVector<PointerIntPair<Node *, 1, Edge::Kind>, 4> DeadTargets;
> - for (Edge &E : *N)
> - if (!RetainedEdges.count(&E.getNode()))
> - DeadTargets.push_back({&E.getNode(), E.getKind()});
> - for (auto DeadTarget : DeadTargets) {
> - Node &TargetN = *DeadTarget.getPointer();
> - bool IsCall = DeadTarget.getInt() == Edge::Call;
> - SCC &TargetC = *G.lookupSCC(TargetN);
> - RefSCC &TargetRC = TargetC.getOuterRefSCC();
> -
> - if (&TargetRC != RC) {
> - RC->removeOutgoingEdge(N, TargetN);
> - if (DebugLogging)
> - dbgs() << "Deleting outgoing edge from '" << N << "' to '" << TargetN
> - << "'\n";
> + // The first step makes these edges uniformly ref edges and accumulates them
> + // into a separate data structure so removal doesn't invalidate anything.
> + SmallVector<Node *, 4> DeadTargets;
> + for (Edge &E : *N) {
> + if (RetainedEdges.count(&E.getNode()))
> continue;
> - }
> - if (DebugLogging)
> - dbgs() << "Deleting internal " << (IsCall ? "call" : "ref")
> - << " edge from '" << N << "' to '" << TargetN << "'\n";
>
> - if (IsCall) {
> + SCC &TargetC = *G.lookupSCC(E.getNode());
> + RefSCC &TargetRC = TargetC.getOuterRefSCC();
> + if (&TargetRC == RC && E.isCall()) {
> if (C != &TargetC) {
> // For separate SCCs this is trivial.
> - RC->switchTrivialInternalEdgeToRef(N, TargetN);
> + RC->switchTrivialInternalEdgeToRef(N, E.getNode());
> } else {
> // Now update the call graph.
> - C = incorporateNewSCCRange(RC->switchInternalEdgeToRef(N, TargetN), G,
> -                            N, C, AM, UR, DebugLogging);
> + C = incorporateNewSCCRange(RC->switchInternalEdgeToRef(N, E.getNode()),
> +                            G, N, C, AM, UR, DebugLogging);
> }
> }
>
> - auto NewRefSCCs = RC->removeInternalRefEdge(N, TargetN);
> - if (!NewRefSCCs.empty()) {
> - // Note that we don't bother to invalidate analyses as ref-edge
> - // connectivity is not really observable in any way and is intended
> - // exclusively to be used for ordering of transforms rather than for
> - // analysis conclusions.
> -
> - // The RC worklist is in reverse postorder, so we first enqueue the
> - // current RefSCC as it will remain the parent of all split RefSCCs, then
> - // we enqueue the new ones in RPO except for the one which contains the
> - // source node as that is the "bottom" we will continue processing in the
> - // bottom-up walk.
> - UR.RCWorklist.insert(RC);
> + // Now that this is ready for actual removal, put it into our list.
> + DeadTargets.push_back(&E.getNode());
> + }
> + // Remove the easy cases quickly and actually pull them out of our list.
> + DeadTargets.erase(
> + llvm::remove_if(DeadTargets,
> + [&](Node *TargetN) {
> + SCC &TargetC = *G.lookupSCC(*TargetN);
> + RefSCC &TargetRC = TargetC.getOuterRefSCC();
> +
> + // We can't trivially remove internal targets, so skip
> + // those.
> + if (&TargetRC == RC)
> + return false;
> +
> + RC->removeOutgoingEdge(N, *TargetN);
> + if (DebugLogging)
> + dbgs() << "Deleting outgoing edge from '" << N
> + << "' to '" << TargetN << "'\n";
> + return true;
> + }),
> + DeadTargets.end());
> +
> + // Now do a batch removal of the internal ref edges left.
> + auto NewRefSCCs = RC->removeInternalRefEdge(N, DeadTargets);
> + if (!NewRefSCCs.empty()) {
> + // The old RefSCC is dead, mark it as such.
> + UR.InvalidatedRefSCCs.insert(RC);
> +
> + // Note that we don't bother to invalidate analyses as ref-edge
> + // connectivity is not really observable in any way and is intended
> + // exclusively to be used for ordering of transforms rather than for
> + // analysis conclusions.
> +
> + // Update RC to the "bottom".
> + assert(G.lookupSCC(N) == C && "Changed the SCC when splitting RefSCCs!");
> + RC = &C->getOuterRefSCC();
> + assert(G.lookupRefSCC(N) == RC && "Failed to update current RefSCC!");
> +
> + // The RC worklist is in reverse postorder, so we enqueue the new ones in
> + // RPO except for the one which contains the source node as that is the
> + // "bottom" we will continue processing in the bottom-up walk.
> + assert(NewRefSCCs.front() == RC &&
> + "New current RefSCC not first in the returned list!");
> + for (RefSCC *NewRC :
> + reverse(make_range(std::next(NewRefSCCs.begin()), NewRefSCCs.end()))) {
> + assert(NewRC != RC && "Should not encounter the current RefSCC further "
> + "in the postorder list of new RefSCCs.");
> + UR.RCWorklist.insert(NewRC);
> if (DebugLogging)
> - dbgs() << "Enqueuing the existing RefSCC in the update worklist: "
> - << *RC << "\n";
> - // Update the RC to the "bottom".
> - assert(G.lookupSCC(N) == C && "Changed the SCC when splitting RefSCCs!");
> - RC = &C->getOuterRefSCC();
> - assert(G.lookupRefSCC(N) == RC && "Failed to update current RefSCC!");
> - assert(NewRefSCCs.front() == RC &&
> - "New current RefSCC not first in the returned list!");
> - for (RefSCC *NewRC : reverse(
> - make_range(std::next(NewRefSCCs.begin()), NewRefSCCs.end()))) {
> - assert(NewRC != RC && "Should not encounter the current RefSCC further "
> - "in the postorder list of new RefSCCs.");
> - UR.RCWorklist.insert(NewRC);
> - if (DebugLogging)
> - dbgs() << "Enqueuing a new RefSCC in the update worklist: " << *NewRC
> - << "\n";
> - }
> + dbgs() << "Enqueuing a new RefSCC in the update worklist: " << *NewRC
> + << "\n";
> }
> }
>
>
> Modified: llvm/trunk/lib/Analysis/LazyCallGraph.cpp
> URL: http://llvm.org/viewvc/llvm-
> project/llvm/trunk/lib/Analysis/LazyCallGraph.cpp?rev=310450&r1=310449&r2=3104
> 50&view=diff
> ==============================================================================
> --- llvm/trunk/lib/Analysis/LazyCallGraph.cpp (original)
> +++ llvm/trunk/lib/Analysis/LazyCallGraph.cpp Wed Aug 9 02:05:27 2017
> @@ -1094,34 +1094,49 @@ void LazyCallGraph::RefSCC::removeOutgoi
> }
>
> SmallVector<LazyCallGraph::RefSCC *, 1>
> -LazyCallGraph::RefSCC::removeInternalRefEdge(Node &SourceN, Node &TargetN) {
> - assert(!(*SourceN)[TargetN].isCall() &&
> -        "Cannot remove a call edge, it must first be made a ref edge");
> +LazyCallGraph::RefSCC::removeInternalRefEdge(Node &SourceN,
> +                                             ArrayRef<Node *> TargetNs) {
> + // We return a list of the resulting *new* RefSCCs in post-order.
> + SmallVector<RefSCC *, 1> Result;
>
> #ifndef NDEBUG
> - // In a debug build, verify the RefSCC is valid to start with and when this
> - // routine finishes.
> + // In a debug build, verify the RefSCC is valid to start with and that either
> + // we return an empty list of result RefSCCs and this RefSCC remains valid,
> + // or we return new RefSCCs and this RefSCC is dead.
> verify();
> - auto VerifyOnExit = make_scope_exit([&]() { verify(); });
> + auto VerifyOnExit = make_scope_exit([&]() {
> + if (Result.empty()) {
> + verify();
> + } else {
> + assert(!G && "A dead RefSCC should have its graph pointer nulled.");
> + assert(SCCs.empty() && "A dead RefSCC should have no SCCs in it.");
> + for (RefSCC *RC : Result)
> + RC->verify();
> + }
> + });
> #endif
>
> - // First remove the actual edge.
> - bool Removed = SourceN->removeEdgeInternal(TargetN);
> - (void)Removed;
> - assert(Removed && "Target not in the edge set for this caller?");
> -
> - // We return a list of the resulting *new* RefSCCs in post-order.
> - SmallVector<RefSCC *, 1> Result;
> + // First remove the actual edges.
> + for (Node *TargetN : TargetNs) {
> + assert(!(*SourceN)[*TargetN].isCall() &&
> + "Cannot remove a call edge, it must first be made a ref
> + edge");
> +
> + bool Removed = SourceN->removeEdgeInternal(*TargetN);
> + (void)Removed;
> + assert(Removed && "Target not in the edge set for this caller?");
> + }
>
> - // Direct recursion doesn't impact the SCC graph at all.
> - if (&SourceN == &TargetN)
> + // Direct self references don't impact the ref graph at all.
> + if (llvm::all_of(TargetNs,
> + [&](Node *TargetN) { return &SourceN == TargetN; }))
> return Result;
>
> - // If this ref edge is within an SCC then there are sufficient other edges to
> - // form a cycle without this edge so removing it is a no-op.
> + // If all targets are in the same SCC as the source, because no call edges
> + // were removed there is no RefSCC structure change.
> SCC &SourceC = *G->lookupSCC(SourceN);
> - SCC &TargetC = *G->lookupSCC(TargetN);
> - if (&SourceC == &TargetC)
> + if (llvm::all_of(TargetNs, [&](Node *TargetN) {
> + return G->lookupSCC(*TargetN) == &SourceC;
> + }))
> return Result;
>
> // We build somewhat synthetic new RefSCCs by providing a postorder mapping
> @@ -1129,33 +1144,13 @@ LazyCallGraph::RefSCC::removeInternalRef
> // than SCCs because this saves a round-trip through the node->SCC map and in
> // the common case, SCCs are small. We will verify that we always give the
> // same number to every node in the SCC such that these are equivalent.
> - const int RootPostOrderNumber = 0;
> - int PostOrderNumber = RootPostOrderNumber + 1;
> + int PostOrderNumber = 0;
> SmallDenseMap<Node *, int> PostOrderMapping;
>
> - // Every node in the target SCC can already reach every node in this RefSCC
> - // (by definition). It is the only node we know will stay inside this RefSCC.
> - // Everything which transitively reaches Target will also remain in the
> - // RefSCC. We handle this by pre-marking that the nodes in the target SCC map
> - // back to the root post order number.
> - //
> - // This also enables us to take a very significant short-cut in the standard
> - // Tarjan walk to re-form RefSCCs below: whenever we build an edge that
> - // references the target node, we know that the target node eventually
> - // references all other nodes in our walk. As a consequence, we can detect
> - // and handle participants in that cycle without walking all the edges that
> - // form the connections, and instead by relying on the fundamental guarantee
> - // coming into this operation.
> - for (Node &N : TargetC)
> - PostOrderMapping[&N] = RootPostOrderNumber;
> -
> // Reset all the other nodes to prepare for a DFS over them, and add them to
> // our worklist.
> SmallVector<Node *, 8> Worklist;
> for (SCC *C : SCCs) {
> - if (C == &TargetC)
> - continue;
> -
> for (Node &N : *C)
> N.DFSNumber = N.LowLink = 0;
>
> @@ -1212,26 +1207,6 @@ LazyCallGraph::RefSCC::removeInternalRef
> continue;
> }
> if (ChildN.DFSNumber == -1) {
> - // Check if this edge's target node connects to the deleted edge's
> - // target node. If so, we know that every node connected will end up
> - // in this RefSCC, so collapse the entire current stack into the root
> - // slot in our SCC numbering. See above for the motivation of
> - // optimizing the target connected nodes in this way.
> - auto PostOrderI = PostOrderMapping.find(&ChildN);
> - if (PostOrderI != PostOrderMapping.end() &&
> - PostOrderI->second == RootPostOrderNumber) {
> - MarkNodeForSCCNumber(*N, RootPostOrderNumber);
> - while (!PendingRefSCCStack.empty())
> - MarkNodeForSCCNumber(*PendingRefSCCStack.pop_back_val(),
> - RootPostOrderNumber);
> - while (!DFSStack.empty())
> - MarkNodeForSCCNumber(*DFSStack.pop_back_val().first,
> - RootPostOrderNumber);
> - // Ensure we break all the way out of the enclosing loop.
> - N = nullptr;
> - break;
> - }
> -
> // If this child isn't currently in this RefSCC, no need to process
> // it.
> ++I;
> @@ -1246,9 +1221,6 @@ LazyCallGraph::RefSCC::removeInternalRef
> N->LowLink = ChildN.LowLink;
> ++I;
> }
> - if (!N)
> - // We short-circuited this node.
> - break;
>
> // We've finished processing N and its descendents, put it on our pending
> // stack to eventually get merged into a RefSCC.
> @@ -1287,32 +1259,31 @@ LazyCallGraph::RefSCC::removeInternalRef
> assert(PendingRefSCCStack.empty() && "Didn't flush all pending nodes!");
> } while (!Worklist.empty());
>
> - // We now have a post-order numbering for RefSCCs and a mapping from each
> - // node in this RefSCC to its final RefSCC. We create each new RefSCC node
> - // (re-using this RefSCC node for the root) and build a radix-sort style map
> - // from postorder number to the RefSCC. We then append SCCs to each of these
> - // RefSCCs in the order they occured in the original SCCs container.
> - for (int i = 1; i < PostOrderNumber; ++i)
> + // If we only ever needed one post-order number, we reformed a ref-cycle for
> + // every node so the RefSCC remains unchanged.
> + if (PostOrderNumber == 1)
> + return Result;
> +
> + // Otherwise we create a collection of new RefSCC nodes and build
> + // a radix-sort style map from postorder number to these new RefSCCs. We then
> + // append SCCs to each of these RefSCCs in the order they occured in the
> + // original SCCs container.
> + for (int i = 0; i < PostOrderNumber; ++i)
> Result.push_back(G->createRefSCC(*G));
>
> // Insert the resulting postorder sequence into the global graph postorder
> - // sequence before the current RefSCC in that sequence. The idea being that
> - // this RefSCC is the target of the reference edge removed, and thus has
> - // a direct or indirect edge to every other RefSCC formed and so must be at
> - // the end of any postorder traversal.
> + // sequence before the current RefSCC in that sequence, and then remove the
> + // current one.
> //
> // FIXME: It'd be nice to change the APIs so that we returned an iterator
> // range over the global postorder sequence and generally use that sequence
> // rather than building a separate result vector here.
> - if (!Result.empty()) {
> - int Idx = G->getRefSCCIndex(*this);
> - G->PostOrderRefSCCs.insert(G->PostOrderRefSCCs.begin() + Idx,
> - Result.begin(), Result.end());
> - for (int i : seq<int>(Idx, G->PostOrderRefSCCs.size()))
> - G->RefSCCIndices[G->PostOrderRefSCCs[i]] = i;
> - assert(G->PostOrderRefSCCs[G->getRefSCCIndex(*this)] == this &&
> - "Failed to update this RefSCC's index after insertion!");
> - }
> + int Idx = G->getRefSCCIndex(*this);
> + G->PostOrderRefSCCs.erase(G->PostOrderRefSCCs.begin() + Idx);
> + G->PostOrderRefSCCs.insert(G->PostOrderRefSCCs.begin() + Idx, Result.begin(),
> +                            Result.end());
> + for (int i : seq<int>(Idx, G->PostOrderRefSCCs.size()))
> + G->RefSCCIndices[G->PostOrderRefSCCs[i]] = i;
>
> for (SCC *C : SCCs) {
> auto PostOrderI = PostOrderMapping.find(&*C->begin());
> @@ -1324,33 +1295,19 @@ LazyCallGraph::RefSCC::removeInternalRef
> assert(PostOrderMapping.find(&N)->second == SCCNumber &&
> "Cannot have different numbers for nodes in the same SCC!");
> #endif
> - if (SCCNumber == 0)
> - // The root node is handled separately by removing the SCCs.
> - continue;
>
> - RefSCC &RC = *Result[SCCNumber - 1];
> + RefSCC &RC = *Result[SCCNumber];
> int SCCIndex = RC.SCCs.size();
> RC.SCCs.push_back(C);
> RC.SCCIndices[C] = SCCIndex;
> C->OuterRefSCC = &RC;
> }
>
> - // Now erase all but the root's SCCs.
> - SCCs.erase(remove_if(SCCs,
> - [&](SCC *C) {
> - return PostOrderMapping.lookup(&*C->begin()) !=
> - RootPostOrderNumber;
> - }),
> - SCCs.end());
> + // Now that we've moved things into the new RefSCCs, clear out our current
> + // one.
> + G = nullptr;
> + SCCs.clear();
> SCCIndices.clear();
> - for (int i = 0, Size = SCCs.size(); i < Size; ++i)
> - SCCIndices[SCCs[i]] = i;
> -
> -#ifndef NDEBUG
> - // Verify all of the new RefSCCs.
> - for (RefSCC *RC : Result)
> - RC->verify();
> -#endif
>
> // Return the new list of SCCs.
> return Result;
>
> Modified: llvm/trunk/unittests/Analysis/LazyCallGraphTest.cpp
> URL: http://llvm.org/viewvc/llvm-
> project/llvm/trunk/unittests/Analysis/LazyCallGraphTest.cpp?rev=310450&r1=3104
> 49&r2=310450&view=diff
> ==============================================================================
> --- llvm/trunk/unittests/Analysis/LazyCallGraphTest.cpp (original)
> +++ llvm/trunk/unittests/Analysis/LazyCallGraphTest.cpp Wed Aug 9 02:05:27 2017
> @@ -1166,20 +1166,21 @@ TEST(LazyCallGraphTest, InlineAndDeleteF
> LazyCallGraph::SCC &NewDC = *NewCs.begin();
> EXPECT_EQ(&NewDC, CG.lookupSCC(D1));
> EXPECT_EQ(&NewDC, CG.lookupSCC(D3));
> - auto NewRCs = DRC.removeInternalRefEdge(D1, D2);
> - EXPECT_EQ(&DRC, CG.lookupRefSCC(D2));
> - EXPECT_EQ(NewRCs.end(), std::next(NewRCs.begin()));
> - LazyCallGraph::RefSCC &NewDRC = **NewRCs.begin();
> + auto NewRCs = DRC.removeInternalRefEdge(D1, {&D2});
> + ASSERT_EQ(2u, NewRCs.size());
> + LazyCallGraph::RefSCC &NewDRC = *NewRCs[0];
> EXPECT_EQ(&NewDRC, CG.lookupRefSCC(D1));
> EXPECT_EQ(&NewDRC, CG.lookupRefSCC(D3));
> - EXPECT_FALSE(NewDRC.isParentOf(DRC));
> - EXPECT_TRUE(CRC.isParentOf(DRC));
> + LazyCallGraph::RefSCC &D2RC = *NewRCs[1];
> + EXPECT_EQ(&D2RC, CG.lookupRefSCC(D2));
> + EXPECT_FALSE(NewDRC.isParentOf(D2RC));
> + EXPECT_TRUE(CRC.isParentOf(D2RC));
> EXPECT_TRUE(CRC.isParentOf(NewDRC));
> - EXPECT_TRUE(DRC.isParentOf(NewDRC));
> + EXPECT_TRUE(D2RC.isParentOf(NewDRC));
> CRC.removeOutgoingEdge(C1, D2);
> - EXPECT_FALSE(CRC.isParentOf(DRC));
> + EXPECT_FALSE(CRC.isParentOf(D2RC));
> EXPECT_TRUE(CRC.isParentOf(NewDRC));
> - EXPECT_TRUE(DRC.isParentOf(NewDRC));
> + EXPECT_TRUE(D2RC.isParentOf(NewDRC));
>
> // Now that we've updated the call graph, D2 is dead, so remove it.
> CG.removeDeadFunction(D2F);
> @@ -1340,7 +1341,7 @@ TEST(LazyCallGraphTest, InternalEdgeRemo
> // Remove the edge from b -> a, which should leave the 3 functions still in
> // a single connected component because of a -> b -> c -> a.
> SmallVector<LazyCallGraph::RefSCC *, 1> NewRCs =
> - RC.removeInternalRefEdge(B, A);
> + RC.removeInternalRefEdge(B, {&A});
> EXPECT_EQ(0u, NewRCs.size());
> EXPECT_EQ(&RC, CG.lookupRefSCC(A));
> EXPECT_EQ(&RC, CG.lookupRefSCC(B));
> @@ -1350,24 +1351,94 @@ TEST(LazyCallGraphTest, InternalEdgeRemo
> EXPECT_EQ(&RC, &*J);
> EXPECT_EQ(E, std::next(J));
>
> + // Increment I before we actually mutate the structure so that it remains
> + // a valid iterator.
> + ++I;
> +
> // Remove the edge from c -> a, which should leave 'a' in the original RefSCC
> // and form a new RefSCC for 'b' and 'c'.
> - NewRCs = RC.removeInternalRefEdge(C, A);
> - EXPECT_EQ(1u, NewRCs.size());
> - EXPECT_EQ(&RC, CG.lookupRefSCC(A));
> - EXPECT_EQ(1, std::distance(RC.begin(), RC.end()));
> - LazyCallGraph::RefSCC &RC2 = *CG.lookupRefSCC(B);
> - EXPECT_EQ(&RC2, CG.lookupRefSCC(C));
> - EXPECT_EQ(&RC2, NewRCs[0]);
> + NewRCs = RC.removeInternalRefEdge(C, {&A});
> + ASSERT_EQ(2u, NewRCs.size());
> + LazyCallGraph::RefSCC &BCRC = *NewRCs[0];
> + LazyCallGraph::RefSCC &ARC = *NewRCs[1];
> + EXPECT_EQ(&ARC, CG.lookupRefSCC(A));
> + EXPECT_EQ(1, std::distance(ARC.begin(), ARC.end()));
> + EXPECT_EQ(&BCRC, CG.lookupRefSCC(B));
> + EXPECT_EQ(&BCRC, CG.lookupRefSCC(C));
> J = CG.postorder_ref_scc_begin();
> EXPECT_NE(I, J);
> - EXPECT_EQ(&RC2, &*J);
> + EXPECT_EQ(&BCRC, &*J);
> + ++J;
> + EXPECT_NE(I, J);
> + EXPECT_EQ(&ARC, &*J);
> ++J;
> EXPECT_EQ(I, J);
> - EXPECT_EQ(&RC, &*J);
> + EXPECT_EQ(E, J);
> +}
> +
> +TEST(LazyCallGraphTest, InternalMultiEdgeRemoval) {
> + LLVMContext Context;
> + // A nice fully connected (including self-edges) RefSCC.
> + std::unique_ptr<Module> M = parseAssembly(
> + Context, "define void @a(i8** %ptr) {\n"
> + "entry:\n"
> + " store i8* bitcast (void(i8**)* @a to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @b to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @c to i8*), i8** %ptr\n"
> + " ret void\n"
> + "}\n"
> + "define void @b(i8** %ptr) {\n"
> + "entry:\n"
> + " store i8* bitcast (void(i8**)* @a to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @b to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @c to i8*), i8** %ptr\n"
> + " ret void\n"
> + "}\n"
> + "define void @c(i8** %ptr) {\n"
> + "entry:\n"
> + " store i8* bitcast (void(i8**)* @a to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @b to i8*), i8** %ptr\n"
> + " store i8* bitcast (void(i8**)* @c to i8*), i8** %ptr\n"
> + " ret void\n"
> + "}\n");
> + LazyCallGraph CG = buildCG(*M);
> +
> + // Force the graph to be fully expanded.
> + CG.buildRefSCCs();
> + auto I = CG.postorder_ref_scc_begin(), E = CG.postorder_ref_scc_end();
> + LazyCallGraph::RefSCC &RC = *I;
> + EXPECT_EQ(E, std::next(I));
> +
> + LazyCallGraph::Node &A = *CG.lookup(lookupFunction(*M, "a"));
> + LazyCallGraph::Node &B = *CG.lookup(lookupFunction(*M, "b"));
> + LazyCallGraph::Node &C = *CG.lookup(lookupFunction(*M, "c"));
> + EXPECT_EQ(&RC, CG.lookupRefSCC(A));
> + EXPECT_EQ(&RC, CG.lookupRefSCC(B));
> + EXPECT_EQ(&RC, CG.lookupRefSCC(C));
> +
> + // Increment I before we actually mutate the structure so that it remains
> + // a valid iterator.
> ++I;
> - EXPECT_EQ(E, I);
> +
> + // Remove the edges from b -> a and b -> c, leaving b in its own RefSCC.
> + SmallVector<LazyCallGraph::RefSCC *, 1> NewRCs =
> + RC.removeInternalRefEdge(B, {&A, &C});
> +
> + ASSERT_EQ(2u, NewRCs.size());
> + LazyCallGraph::RefSCC &BRC = *NewRCs[0];
> + LazyCallGraph::RefSCC &ACRC = *NewRCs[1];
> + EXPECT_EQ(&BRC, CG.lookupRefSCC(B));
> + EXPECT_EQ(1, std::distance(BRC.begin(), BRC.end()));
> + EXPECT_EQ(&ACRC, CG.lookupRefSCC(A));
> + EXPECT_EQ(&ACRC, CG.lookupRefSCC(C));
> + auto J = CG.postorder_ref_scc_begin();
> + EXPECT_NE(I, J);
> + EXPECT_EQ(&BRC, &*J);
> ++J;
> + EXPECT_NE(I, J);
> + EXPECT_EQ(&ACRC, &*J);
> + ++J;
> + EXPECT_EQ(I, J);
> EXPECT_EQ(E, J);
> }
>
> @@ -1420,7 +1491,7 @@ TEST(LazyCallGraphTest, InternalNoOpEdge
>
> // Remove the edge from a -> c which doesn't change anything.
> SmallVector<LazyCallGraph::RefSCC *, 1> NewRCs =
> - RC.removeInternalRefEdge(AN, CN);
> + RC.removeInternalRefEdge(AN, {&CN});
> EXPECT_EQ(0u, NewRCs.size());
> EXPECT_EQ(&RC, CG.lookupRefSCC(AN));
> EXPECT_EQ(&RC, CG.lookupRefSCC(BN));
> @@ -1435,8 +1506,8 @@ TEST(LazyCallGraphTest, InternalNoOpEdge
>
> // Remove the edge from b -> a and c -> b; again this doesn't change
> // anything.
> - NewRCs = RC.removeInternalRefEdge(BN, AN);
> - NewRCs = RC.removeInternalRefEdge(CN, BN);
> + NewRCs = RC.removeInternalRefEdge(BN, {&AN});
> + NewRCs = RC.removeInternalRefEdge(CN, {&BN});
> EXPECT_EQ(0u, NewRCs.size());
> EXPECT_EQ(&RC, CG.lookupRefSCC(AN));
> EXPECT_EQ(&RC, CG.lookupRefSCC(BN));
>
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-commits