[llvm] r200903 - [PM] Add a new "lazy" call graph analysis pass for the new pass manager.

Chandler Carruth chandlerc at gmail.com
Thu Feb 6 00:28:14 PST 2014


This was fixed a couple of commits later, I think?


On Wed, Feb 5, 2014 at 11:48 PM, Kostya Serebryany <kcc at google.com> wrote:

> This makes our bootstrap bot sad...
>
> http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap/builds/2043/steps/check-llvm%20msan/logs/stdio
>
> FAIL: LLVM :: Analysis/LazyCallGraph/basic.ll (127 of 9712)
> ******************** TEST 'LLVM :: Analysis/LazyCallGraph/basic.ll' FAILED ********************
> Script:
> --
> /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm_build_msan/./bin/opt -disable-output -passes=print-cg /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm/test/Analysis/LazyCallGraph/basic.ll 2>&1 | /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm_build_msan/./bin/FileCheck /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm/test/Analysis/LazyCallGraph/basic.ll
> --
> Exit Code: 1
>
> Command Output (stderr):
> --
> /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm/test/Analysis/LazyCallGraph/basic.ll:6:16: error: expected string not found in input
> ; CHECK-LABEL: Call edges in function: f
>                ^
> <stdin>:1:1: note: scanning from here
> Printing the call graph for module: /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm/test/Analysis/LazyCallGraph/basic.ll
> ^
> <stdin>:1:14: note: possible intended match here
> Printing the call graph for module: /home/dtoolsbot/build/sanitizer-x86_64-linux-bootstrap/build/llvm/test/Analysis/LazyCallGraph/basic.ll
>              ^
>
>
>
>
> On Thu, Feb 6, 2014 at 8:37 AM, Chandler Carruth <chandlerc at gmail.com> wrote:
>
>> Author: chandlerc
>> Date: Wed Feb  5 22:37:03 2014
>> New Revision: 200903
>>
>> URL: http://llvm.org/viewvc/llvm-project?rev=200903&view=rev
>> Log:
>> [PM] Add a new "lazy" call graph analysis pass for the new pass manager.
>>
>> The primary motivation for this pass is to separate the call graph
>> analysis used by the new pass manager's CGSCC pass management from the
>> existing call graph analysis pass. That analysis pass is (somewhat
>> unfortunately) over-constrained by the existing CallGraphSCCPassManager
>> requirements. Those requirements make it *really* hard to cleanly layer
>> the needed functionality for the new pass manager on top of the existing
>> analysis.
>>
>> However, there are also a bunch of things that the pass manager would
>> specifically benefit from doing differently from the existing call graph
>> analysis, and this new implementation tries to address several of them:
>>
>> - Be lazy about scanning function definitions. The existing pass eagerly
>>   scans the entire module to build the initial graph. This new pass is
>>   significantly more lazy, and I plan to push this even further to
>>   maximize locality during CGSCC walks.
>> - Don't use a single synthetic node to partition functions with an
>>   indirect call from functions whose address is taken. This node creates
>>   a huge choke-point which would preclude good parallelization across
>>   the fanout of the SCC graph when we got to the point of looking at
>>   such changes to LLVM.
>> - Use a memory dense and lightweight representation of the call graph
>>   rather than value handles and tracking call instructions. This will
>>   require explicit update calls instead of some updates working
>>   transparently, but should end up being significantly more efficient.
>>   The explicit update calls ended up being needed in many cases for the
>>   existing call graph so we don't really lose anything.
>> - Doesn't explicitly model SCCs and thus doesn't provide an "identity"
>>   for an SCC which is stable across updates. This is essential for the
>>   new pass manager to work correctly.
>> - Only form the graph necessary for traversing all of the functions in
>>   an SCC friendly order. This is a much simpler graph structure and
>>   should be more memory dense. It does limit the ways in which it is
>>   appropriate to use this analysis. I wish I had a better name than
>>   "call graph". I've commented extensively this aspect.
>>
>> This is still very much a WIP; in fact, it is really just the initial
>> bits. But it is about the fourth version of the initial bits that I've
>> implemented, with each of the others running into really frustrating
>> problems. This looks like it will actually work and I'd like to split the
>> actual complexity across commits for the sake of my reviewers. =] The
>> rest of the implementation along with lots of wiring will follow
>> somewhat more rapidly now that there is a good path forward.
>>
>> Naturally, this doesn't impact any of the existing optimizer. This code
>> is specific to the new pass manager.
>>
>> A bunch of thanks are deserved for the various folks that have helped
>> with the design of this, especially Nick Lewycky who actually sat with
>> me to go through the fundamentals of the final version here.
>>
>> Added:
>>     llvm/trunk/include/llvm/Analysis/LazyCallGraph.h
>>     llvm/trunk/lib/Analysis/LazyCallGraph.cpp
>>     llvm/trunk/test/Analysis/LazyCallGraph/
>>     llvm/trunk/test/Analysis/LazyCallGraph/basic.ll
>> Modified:
>>     llvm/trunk/lib/Analysis/CMakeLists.txt
>>     llvm/trunk/tools/opt/NewPMDriver.cpp
>>     llvm/trunk/tools/opt/Passes.cpp
>>
>> Added: llvm/trunk/include/llvm/Analysis/LazyCallGraph.h
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/include/llvm/Analysis/LazyCallGraph.h?rev=200903&view=auto
>>
>> ==============================================================================
>> --- llvm/trunk/include/llvm/Analysis/LazyCallGraph.h (added)
>> +++ llvm/trunk/include/llvm/Analysis/LazyCallGraph.h Wed Feb  5 22:37:03 2014
>> @@ -0,0 +1,337 @@
>> +//===- LazyCallGraph.h - Analysis of a Module's call graph ------*- C++ -*-===//
>> +//
>> +//                     The LLVM Compiler Infrastructure
>> +//
>> +// This file is distributed under the University of Illinois Open Source
>> +// License. See LICENSE.TXT for details.
>> +//
>>
>> +//===----------------------------------------------------------------------===//
>> +/// \file
>> +///
>> +/// Implements a lazy call graph analysis and related passes for the new pass
>> +/// manager.
>> +///
>> +/// NB: This is *not* a traditional call graph! It is a graph which models both
>> +/// the current calls and potential calls. As a consequence there are many
>> +/// edges in this call graph that do not correspond to a 'call' or 'invoke'
>> +/// instruction.
>> +///
>> +/// The primary use case of this graph analysis is to facilitate iterating
>> +/// across the functions of a module in ways that ensure all callees are
>> +/// visited prior to a caller (given any SCC constraints), or vice versa. As
>> +/// such it is particularly well suited to organizing CGSCC optimizations such
>> +/// as inlining, outlining, argument promotion, etc. That is its primary use
>> +/// case and motivates the design. It may not be appropriate for other
>> +/// purposes. The use graph of functions or some other conservative analysis of
>> +/// call instructions may be interesting for optimizations and subsequent
>> +/// analyses which don't work in the context of an overly specified
>> +/// potential-call-edge graph.
>> +///
>> +/// To understand the specific rules and nature of this call graph analysis,
>> +/// see the documentation of the \c LazyCallGraph below.
>> +///
>> +//===----------------------------------------------------------------------===//
>> +
>> +#ifndef LLVM_ANALYSIS_LAZY_CALL_GRAPH
>> +#define LLVM_ANALYSIS_LAZY_CALL_GRAPH
>> +
>> +#include "llvm/ADT/DenseMap.h"
>> +#include "llvm/ADT/PointerUnion.h"
>> +#include "llvm/ADT/SmallVector.h"
>> +#include "llvm/ADT/SmallPtrSet.h"
>> +#include "llvm/ADT/STLExtras.h"
>> +#include "llvm/IR/Module.h"
>> +#include "llvm/IR/Function.h"
>> +#include "llvm/IR/BasicBlock.h"
>> +#include "llvm/Support/Allocator.h"
>> +#include <iterator>
>> +
>> +namespace llvm {
>> +class ModuleAnalysisManager;
>> +class PreservedAnalyses;
>> +class raw_ostream;
>> +
>> +/// \brief A lazily constructed view of the call graph of a module.
>> +///
>> +/// With the edges of this graph, the motivating constraint that we are
>> +/// attempting to maintain is that function-local optimization, CGSCC-local
>> +/// optimizations, and optimizations transforming a pair of functions connected
>> +/// by an edge in the graph, do not invalidate a bottom-up traversal of the SCC
>> +/// DAG. That is, no optimizations will delete, remove, or add an edge such
>> +/// that functions already visited in a bottom-up order of the SCC DAG are no
>> +/// longer valid to have visited, or such that functions not yet visited in
>> +/// a bottom-up order of the SCC DAG are not required to have already been
>> +/// visited.
>> +///
>> +/// Within this constraint, the desire is to minimize the merge points of the
>> +/// SCC DAG. The greater the fanout of the SCC DAG and the fewer merge points
>> +/// in the SCC DAG, the more independence there is in optimizing within it.
>> +/// There is a strong desire to enable parallelization of optimizations over
>> +/// the call graph, and both limited fanout and merge points will (artificially
>> +/// in some cases) limit the scaling of such an effort.
>> +///
>> +/// To this end, the graph represents both direct and any potential resolution
>> +/// to an indirect call edge. Another way to think about it is that it
>> +/// represents both the direct call edges and any direct call edges that might
>> +/// be formed through static optimizations. Specifically, it considers taking
>> +/// the address of a function to be an edge in the call graph because this
>> +/// might be forwarded to become a direct call by some subsequent
>> +/// function-local optimization. The result is that the graph closely follows
>> +/// the use-def edges for functions. Walking "up" the graph can be done by
>> +/// looking at all of the uses of a function.
>> +///
>> +/// The roots of the call graph are the external functions and functions
>> +/// escaped into global variables. Those functions can be called from outside
>> +/// of the module or via unknowable means in the IR -- we may not be able to
>> +/// form even a potential call edge from a function body which may dynamically
>> +/// load the function and call it.
>> +///
>> +/// This analysis still requires updates to remain valid after optimizations
>> +/// which could potentially change the set of potential callees. The
>> +/// constraints it operates under only make the traversal order remain valid.
>> +///
>> +/// The entire analysis must be re-computed if full interprocedural
>> +/// optimizations run at any point. For example, globalopt completely
>> +/// invalidates the information in this analysis.
>> +///
>> +/// FIXME: This class is named LazyCallGraph in a lame attempt to distinguish
>> +/// it from the existing CallGraph. At some point, it is expected that this
>> +/// will be the only call graph and it will be renamed accordingly.
>> +class LazyCallGraph {
>> +public:
>> +  class Node;
>> +  typedef SmallVector<PointerUnion<Function *, Node *>, 4> NodeVectorT;
>> +  typedef SmallVectorImpl<PointerUnion<Function *, Node *> > NodeVectorImplT;
>> +
>> +  /// \brief A lazy iterator used for both the entry nodes and child nodes.
>> +  ///
>> +  /// When this iterator is dereferenced, if not yet available, a function will
>> +  /// be scanned for "calls" or uses of functions and its child information
>> +  /// will be constructed. All of these results are accumulated and cached in
>> +  /// the graph.
>> +  class iterator : public std::iterator<std::bidirectional_iterator_tag, Node *,
>> +                                        ptrdiff_t, Node *, Node *> {
>> +    friend class LazyCallGraph;
>> +    friend class LazyCallGraph::Node;
>> +    typedef std::iterator<std::bidirectional_iterator_tag, Node *, ptrdiff_t,
>> +                          Node *, Node *> BaseT;
>> +
>> +    /// \brief Nonce type to select the constructor for the end iterator.
>> +    struct IsAtEndT {};
>> +
>> +    LazyCallGraph &G;
>> +    NodeVectorImplT::iterator NI;
>> +
>> +    // Build the begin iterator for a node.
>> +    explicit iterator(LazyCallGraph &G, NodeVectorImplT &Nodes)
>> +        : G(G), NI(Nodes.begin()) {}
>> +
>> +    // Build the end iterator for a node. This is selected purely by overload.
>> +    iterator(LazyCallGraph &G, NodeVectorImplT &Nodes, IsAtEndT /*Nonce*/)
>> +        : G(G), NI(Nodes.end()) {}
>> +
>> +  public:
>> +    iterator(const iterator &Arg) : G(Arg.G), NI(Arg.NI) {}
>> +
>> +    iterator &operator=(iterator Arg) {
>> +      std::swap(Arg, *this);
>> +      return *this;
>> +    }
>> +
>> +    bool operator==(const iterator &Arg) { return NI == Arg.NI; }
>> +    bool operator!=(const iterator &Arg) { return !operator==(Arg); }
>> +
>> +    reference operator*() const {
>> +      if (NI->is<Node *>())
>> +        return NI->get<Node *>();
>> +
>> +      Function *F = NI->get<Function *>();
>> +      Node *ChildN = G.get(*F);
>> +      *NI = ChildN;
>> +      return ChildN;
>> +    }
>> +    pointer operator->() const { return operator*(); }
>> +
>> +    iterator &operator++() {
>> +      ++NI;
>> +      return *this;
>> +    }
>> +    iterator operator++(int) {
>> +      iterator prev = *this;
>> +      ++*this;
>> +      return prev;
>> +    }
>> +
>> +    iterator &operator--() {
>> +      --NI;
>> +      return *this;
>> +    }
>> +    iterator operator--(int) {
>> +      iterator next = *this;
>> +      --*this;
>> +      return next;
>> +    }
>> +  };
>> +
>> +  /// \brief Construct a graph for the given module.
>> +  ///
>> +  /// This sets up the graph and computes all of the entry points of the graph.
>> +  /// No function definitions are scanned until their nodes in the graph are
>> +  /// requested during traversal.
>> +  LazyCallGraph(Module &M);
>> +
>> +  /// \brief Copy constructor.
>> +  ///
>> +  /// This does a deep copy of the graph. It does no verification that the
>> +  /// graph remains valid for the module. It is also relatively expensive.
>> +  LazyCallGraph(const LazyCallGraph &G);
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +  /// \brief Move constructor.
>> +  ///
>> +  /// This is a deep move. It leaves G in an undefined but destroyable state.
>> +  /// Any other operation on G is likely to fail.
>> +  LazyCallGraph(LazyCallGraph &&G);
>> +#endif
>> +
>> +  iterator begin() { return iterator(*this, EntryNodes); }
>> +  iterator end() { return iterator(*this, EntryNodes, iterator::IsAtEndT()); }
>> +
>> +  /// \brief Lookup a function in the graph which has already been scanned and
>> +  /// added.
>> +  Node *lookup(const Function &F) const { return NodeMap.lookup(&F); }
>> +
>> +  /// \brief Get a graph node for a given function, scanning it to populate the
>> +  /// graph data as necessary.
>> +  Node *get(Function &F) {
>> +    Node *&N = NodeMap[&F];
>> +    if (N)
>> +      return N;
>> +
>> +    return insertInto(F, N);
>> +  }
>> +
>> +private:
>> +  Module &M;
>> +
>> +  /// \brief Allocator that holds all the call graph nodes.
>> +  SpecificBumpPtrAllocator<Node> BPA;
>> +
>> +  /// \brief Maps function->node for fast lookup.
>> +  DenseMap<const Function *, Node *> NodeMap;
>> +
>> +  /// \brief The entry nodes to the graph.
>> +  ///
>> +  /// These nodes are reachable through "external" means. Put another way, they
>> +  /// escape at the module scope.
>> +  NodeVectorT EntryNodes;
>> +
>> +  /// \brief Set of the entry nodes to the graph.
>> +  SmallPtrSet<Function *, 4> EntryNodeSet;
>> +
>> +  /// \brief Helper to insert a new function, with an already looked-up entry in
>> +  /// the NodeMap.
>> +  Node *insertInto(Function &F, Node *&MappedN);
>> +
>> +  /// \brief Helper to copy a node from another graph into this one.
>> +  Node *copyInto(const Node &OtherN);
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +  /// \brief Helper to move a node from another graph into this one.
>> +  Node *moveInto(Node &&OtherN);
>> +#endif
>> +};
>> +
>> +/// \brief A node in the call graph.
>> +///
>> +/// This represents a single node. Its primary roles are to cache the list of
>> +/// callees, de-duplicate and provide fast testing of whether a function is
>> +/// a callee, and facilitate iteration of child nodes in the graph.
>> +class LazyCallGraph::Node {
>> +  friend LazyCallGraph;
>> +
>> +  LazyCallGraph &G;
>> +  Function &F;
>> +  mutable NodeVectorT Callees;
>> +  SmallPtrSet<Function *, 4> CalleeSet;
>> +
>> +  /// \brief Basic constructor implements the scanning of F into Callees and
>> +  /// CalleeSet.
>> +  Node(LazyCallGraph &G, Function &F);
>> +
>> +  /// \brief Constructor used when copying a node from one graph to another.
>> +  Node(LazyCallGraph &G, const Node &OtherN);
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +  /// \brief Constructor used when moving a node from one graph to another.
>> +  Node(LazyCallGraph &G, Node &&OtherN);
>> +#endif
>> +
>> +public:
>> +  typedef LazyCallGraph::iterator iterator;
>> +
>> +  Function &getFunction() const {
>> +    return F;
>> +  };
>> +
>> +  iterator begin() const { return iterator(G, Callees); }
>> +  iterator end() const { return iterator(G, Callees, iterator::IsAtEndT()); }
>> +
>> +  /// Equality is defined as address equality.
>> +  bool operator==(const Node &N) const { return this == &N; }
>> +  bool operator!=(const Node &N) const { return !operator==(N); }
>> +};
>> +
>> +// Provide GraphTraits specializations for call graphs.
>> +template <> struct GraphTraits<LazyCallGraph::Node *> {
>> +  typedef LazyCallGraph::Node NodeType;
>> +  typedef LazyCallGraph::iterator ChildIteratorType;
>> +
>> +  static NodeType *getEntryNode(NodeType *N) { return N; }
>> +  static ChildIteratorType child_begin(NodeType *N) { return N->begin(); }
>> +  static ChildIteratorType child_end(NodeType *N) { return N->end(); }
>> +};
>> +template <> struct GraphTraits<LazyCallGraph *> {
>> +  typedef LazyCallGraph::Node NodeType;
>> +  typedef LazyCallGraph::iterator ChildIteratorType;
>> +
>> +  static NodeType *getEntryNode(NodeType *N) { return N; }
>> +  static ChildIteratorType child_begin(NodeType *N) { return N->begin(); }
>> +  static ChildIteratorType child_end(NodeType *N) { return N->end(); }
>> +};
>> +
>> +/// \brief An analysis pass which computes the call graph for a module.
>> +class LazyCallGraphAnalysis {
>> +public:
>> +  /// \brief Inform generic clients of the result type.
>> +  typedef LazyCallGraph Result;
>> +
>> +  static void *ID() { return (void *)&PassID; }
>> +
>> +  /// \brief Compute the \c LazyCallGraph for the module \c M.
>> +  ///
>> +  /// This just builds the set of entry points to the call graph. The rest is
>> +  /// built lazily as it is walked.
>> +  LazyCallGraph run(Module *M) { return LazyCallGraph(*M); }
>> +
>> +private:
>> +  static char PassID;
>> +};
>> +
>> +/// \brief A pass which prints the call graph to a \c raw_ostream.
>> +///
>> +/// This is primarily useful for testing the analysis.
>> +class LazyCallGraphPrinterPass {
>> +  raw_ostream &OS;
>> +
>> +public:
>> +  explicit LazyCallGraphPrinterPass(raw_ostream &OS);
>> +
>> +  PreservedAnalyses run(Module *M, ModuleAnalysisManager *AM);
>> +
>> +  static StringRef name() { return "LazyCallGraphPrinterPass"; }
>> +};
>> +
>> +}
>> +
>> +#endif
>>
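>> As a quick illustration, here is a minimal sketch of how the interface above
>> might be used from C++ once a Module is in hand; the function name
>> walkEntryNodes is hypothetical and not part of this commit:
>>
>>   #include "llvm/Analysis/LazyCallGraph.h"
>>   #include "llvm/IR/Module.h"
>>   using namespace llvm;
>>
>>   void walkEntryNodes(Module &M) {
>>     LazyCallGraph G(M); // Only the entry points are computed eagerly.
>>     for (LazyCallGraph::iterator I = G.begin(), E = G.end(); I != E; ++I) {
>>       LazyCallGraph::Node *N = *I; // Dereferencing lazily scans the function.
>>       // Each child edge is a potential call, not necessarily an actual
>>       // 'call' or 'invoke' instruction.
>>       for (LazyCallGraph::Node::iterator CI = N->begin(), CE = N->end();
>>            CI != CE; ++CI)
>>         (void)(*CI)->getFunction();
>>     }
>>   }
>>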
>> Modified: llvm/trunk/lib/Analysis/CMakeLists.txt
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/CMakeLists.txt?rev=200903&r1=200902&r2=200903&view=diff
>>
>> ==============================================================================
>> --- llvm/trunk/lib/Analysis/CMakeLists.txt (original)
>> +++ llvm/trunk/lib/Analysis/CMakeLists.txt Wed Feb  5 22:37:03 2014
>> @@ -23,6 +23,7 @@ add_llvm_library(LLVMAnalysis
>>    InstructionSimplify.cpp
>>    Interval.cpp
>>    IntervalPartition.cpp
>> +  LazyCallGraph.cpp
>>    LazyValueInfo.cpp
>>    LibCallAliasAnalysis.cpp
>>    LibCallSemantics.cpp
>>
>> Added: llvm/trunk/lib/Analysis/LazyCallGraph.cpp
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/lib/Analysis/LazyCallGraph.cpp?rev=200903&view=auto
>>
>> ==============================================================================
>> --- llvm/trunk/lib/Analysis/LazyCallGraph.cpp (added)
>> +++ llvm/trunk/lib/Analysis/LazyCallGraph.cpp Wed Feb  5 22:37:03 2014
>> @@ -0,0 +1,195 @@
>> +//===- LazyCallGraph.cpp - Analysis of a Module's call graph --------------===//
>> +//
>> +//                     The LLVM Compiler Infrastructure
>> +//
>> +// This file is distributed under the University of Illinois Open Source
>> +// License. See LICENSE.TXT for details.
>> +//
>>
>> +//===----------------------------------------------------------------------===//
>> +
>> +#include "llvm/Analysis/LazyCallGraph.h"
>> +#include "llvm/ADT/SCCIterator.h"
>> +#include "llvm/IR/Instructions.h"
>> +#include "llvm/IR/PassManager.h"
>> +#include "llvm/Support/CallSite.h"
>> +#include "llvm/Support/raw_ostream.h"
>> +#include "llvm/InstVisitor.h"
>> +
>> +using namespace llvm;
>> +
>> +static void findCallees(
>> +    SmallVectorImpl<Constant *> &Worklist, SmallPtrSetImpl<Constant *> &Visited,
>> +    SmallVectorImpl<PointerUnion<Function *, LazyCallGraph::Node *> > &Callees,
>> +    SmallPtrSetImpl<Function *> &CalleeSet) {
>> +  while (!Worklist.empty()) {
>> +    Constant *C = Worklist.pop_back_val();
>> +
>> +    if (Function *F = dyn_cast<Function>(C)) {
>> +      // Note that we consider *any* function with a definition to be a viable
>> +      // edge. Even if the function's definition is subject to replacement by
>> +      // some other module (say, a weak definition) there may still be
>> +      // optimizations which essentially speculate based on the definition and
>> +      // a way to check that the specific definition is in fact the one being
>> +      // used. For example, this could be done by moving the weak definition to
>> +      // a strong (internal) definition and making the weak definition be an
>> +      // alias. Then a test of the address of the weak function against the new
>> +      // strong definition's address would be an effective way to determine the
>> +      // safety of optimizing a direct call edge.
>> +      if (!F->isDeclaration() && CalleeSet.insert(F))
>> +          Callees.push_back(F);
>> +      continue;
>> +    }
>> +
>> +    for (User::value_op_iterator OI = C->value_op_begin(),
>> +                                 OE = C->value_op_end();
>> +         OI != OE; ++OI)
>> +      if (Visited.insert(cast<Constant>(*OI)))
>> +        Worklist.push_back(cast<Constant>(*OI));
>> +  }
>> +}
>> +
>> +LazyCallGraph::Node::Node(LazyCallGraph &G, Function &F) : G(G), F(F) {
>> +  SmallVector<Constant *, 16> Worklist;
>> +  SmallPtrSet<Constant *, 16> Visited;
>> +  // Find all the potential callees in this function. First walk the
>> +  // instructions and add every operand which is a constant to the worklist.
>> +  for (Function::iterator BBI = F.begin(), BBE = F.end(); BBI != BBE; ++BBI)
>> +    for (BasicBlock::iterator II = BBI->begin(), IE = BBI->end(); II != IE;
>> +         ++II)
>> +      for (User::value_op_iterator OI = II->value_op_begin(),
>> +                                   OE = II->value_op_end();
>> +           OI != OE; ++OI)
>> +        if (Constant *C = dyn_cast<Constant>(*OI))
>> +          if (Visited.insert(C))
>> +            Worklist.push_back(C);
>> +
>> +  // We've collected all the constant (and thus potentially function or
>> +  // function containing) operands to all of the instructions in the function.
>> +  // Process them (recursively) collecting every function found.
>> +  findCallees(Worklist, Visited, Callees, CalleeSet);
>> +}
>> +
>> +LazyCallGraph::Node::Node(LazyCallGraph &G, const Node &OtherN)
>> +    : G(G), F(OtherN.F), CalleeSet(OtherN.CalleeSet) {
>> +  // Loop over the other node's callees, adding the Function*s to our list
>> +  // directly, and recursing to add the Node*s.
>> +  Callees.reserve(OtherN.Callees.size());
>> +  for (NodeVectorImplT::iterator OI = OtherN.Callees.begin(),
>> +                                 OE = OtherN.Callees.end();
>> +       OI != OE; ++OI)
>> +    if (Function *Callee = OI->dyn_cast<Function *>())
>> +      Callees.push_back(Callee);
>> +    else
>> +      Callees.push_back(G.copyInto(*OI->get<Node *>()));
>> +}
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +LazyCallGraph::Node::Node(LazyCallGraph &G, Node &&OtherN)
>> +    : G(G), F(OtherN.F), Callees(std::move(OtherN.Callees)),
>> +      CalleeSet(std::move(OtherN.CalleeSet)) {
>> +  // Loop over our Callees. They've been moved from another node, but we need
>> +  // to move the Node*s to live under our bump ptr allocator.
>> +  for (NodeVectorImplT::iterator CI = Callees.begin(), CE = Callees.end();
>> +       CI != CE; ++CI)
>> +    if (Node *ChildN = CI->dyn_cast<Node *>())
>> +      *CI = G.moveInto(std::move(*ChildN));
>> +}
>> +#endif
>> +
>> +LazyCallGraph::LazyCallGraph(Module &M) : M(M) {
>> +  for (Module::iterator FI = M.begin(), FE = M.end(); FI != FE; ++FI)
>> +    if (!FI->isDeclaration() && !FI->hasLocalLinkage())
>> +      if (EntryNodeSet.insert(&*FI))
>> +        EntryNodes.push_back(&*FI);
>> +
>> +  // Now add entry nodes for functions reachable via initializers to globals.
>> +  SmallVector<Constant *, 16> Worklist;
>> +  SmallPtrSet<Constant *, 16> Visited;
>> +  for (Module::global_iterator GI = M.global_begin(), GE = M.global_end(); GI != GE; ++GI)
>> +    if (GI->hasInitializer())
>> +      if (Visited.insert(GI->getInitializer()))
>> +        Worklist.push_back(GI->getInitializer());
>> +
>> +  findCallees(Worklist, Visited, EntryNodes, EntryNodeSet);
>> +}
>> +
>> +LazyCallGraph::LazyCallGraph(const LazyCallGraph &G)
>> +    : M(G.M), EntryNodeSet(G.EntryNodeSet) {
>> +  EntryNodes.reserve(EntryNodes.size());
>> +  for (NodeVectorImplT::iterator EI = EntryNodes.begin(),
>> +                                 EE = EntryNodes.end();
>> +       EI != EE; ++EI)
>> +    if (Function *Callee = EI->dyn_cast<Function *>())
>> +      EntryNodes.push_back(Callee);
>> +    else
>> +      EntryNodes.push_back(copyInto(*EI->get<Node *>()));
>> +}
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +// FIXME: This would be crazy simpler if BumpPtrAllocator were movable without
>> +// invalidating any of the allocated memory. We should make that be the case at
>> +// some point and delete this.
>> +LazyCallGraph::LazyCallGraph(LazyCallGraph &&G)
>> +    : M(G.M), EntryNodes(std::move(G.EntryNodes)),
>> +      EntryNodeSet(std::move(G.EntryNodeSet)) {
>> +  // Loop over our EntryNodes. They've been moved from another graph, but we
>> +  // need to move the Node*s to live under our bump ptr allocator.
>> +  for (NodeVectorImplT::iterator EI = EntryNodes.begin(), EE = EntryNodes.end();
>> +       EI != EE; ++EI)
>> +    if (Node *EntryN = EI->dyn_cast<Node *>())
>> +      *EI = G.moveInto(std::move(*EntryN));
>> +}
>> +#endif
>> +
>> +LazyCallGraph::Node *LazyCallGraph::insertInto(Function &F, Node *&MappedN) {
>> +  return new (MappedN = BPA.Allocate()) Node(*this, F);
>> +}
>> +
>> +LazyCallGraph::Node *LazyCallGraph::copyInto(const Node &OtherN) {
>> +  Node *&N = NodeMap[&OtherN.F];
>> +  if (N)
>> +    return N;
>> +
>> +  return new (N = BPA.Allocate()) Node(*this, OtherN);
>> +}
>> +
>> +#if LLVM_HAS_RVALUE_REFERENCES
>> +LazyCallGraph::Node *LazyCallGraph::moveInto(Node &&OtherN) {
>> +  Node *&N = NodeMap[&OtherN.F];
>> +  if (N)
>> +    return N;
>> +
>> +  return new (N = BPA.Allocate()) Node(*this, std::move(OtherN));
>> +}
>> +#endif
>> +
>> +char LazyCallGraphAnalysis::PassID;
>> +
>> +LazyCallGraphPrinterPass::LazyCallGraphPrinterPass(raw_ostream &OS) : OS(OS) {}
>> +
>> +static void printNodes(raw_ostream &OS, LazyCallGraph::Node &N,
>> +                       SmallPtrSetImpl<LazyCallGraph::Node *> &Printed) {
>> +  // Recurse depth first through the nodes.
>> +  for (LazyCallGraph::iterator I = N.begin(), E = N.end(); I != E; ++I)
>> +    if (Printed.insert(*I))
>> +      printNodes(OS, **I, Printed);
>> +
>> +  OS << "  Call edges in function: " << N.getFunction().getName() <<
>> "\n";
>> +  for (LazyCallGraph::iterator I = N.begin(), E = N.end(); I != E; ++I)
>> +    OS << "    -> " << I->getFunction().getName() << "\n";
>> +
>> +  OS << "\n";
>> +}
>> +
>> +PreservedAnalyses LazyCallGraphPrinterPass::run(Module *M,
>> +                                                ModuleAnalysisManager *AM) {
>> +  LazyCallGraph &G = AM->getResult<LazyCallGraphAnalysis>(M);
>> +
>> +  OS << "Printing the call graph for module: " <<
>> M->getModuleIdentifier() << "\n\n";
>> +
>> +  SmallPtrSet<LazyCallGraph::Node *, 16> Printed;
>> +  for (LazyCallGraph::iterator I = G.begin(), E = G.end(); I != E; ++I)
>> +    if (Printed.insert(*I))
>> +      printNodes(OS, **I, Printed);
>> +
>> +  return PreservedAnalyses::all();
>> +}
>>
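>> For reference, based on the printer code above (and the CHECK lines in the
>> test added below), the expected output for a small module in which a function
>> @a calls @b and @c -- these names are made up purely for illustration --
>> would look roughly like:
>>
>>   Printing the call graph for module: example.ll
>>
>>     Call edges in function: b
>>
>>     Call edges in function: c
>>
>>     Call edges in function: a
>>       -> b
>>       -> c
>>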
>> Added: llvm/trunk/test/Analysis/LazyCallGraph/basic.ll
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/test/Analysis/LazyCallGraph/basic.ll?rev=200903&view=auto
>>
>> ==============================================================================
>> --- llvm/trunk/test/Analysis/LazyCallGraph/basic.ll (added)
>> +++ llvm/trunk/test/Analysis/LazyCallGraph/basic.ll Wed Feb  5 22:37:03 2014
>> @@ -0,0 +1,126 @@
>> +; RUN: opt -disable-output -passes=print-cg %s 2>&1 | FileCheck %s
>> +;
>> +; Basic validation of the call graph analysis used in the new pass manager.
>> +
>> +define void @f() {
>> +; CHECK-LABEL: Call edges in function: f
>> +; CHECK-NOT: ->
>> +
>> +entry:
>> +  ret void
>> +}
>> +
>> +; A bunch more functions just to make it easier to test several call edges at once.
>> +define void @f1() {
>> +  ret void
>> +}
>> +define void @f2() {
>> +  ret void
>> +}
>> +define void @f3() {
>> +  ret void
>> +}
>> +define void @f4() {
>> +  ret void
>> +}
>> +define void @f5() {
>> +  ret void
>> +}
>> +define void @f6() {
>> +  ret void
>> +}
>> +define void @f7() {
>> +  ret void
>> +}
>> +define void @f8() {
>> +  ret void
>> +}
>> +define void @f9() {
>> +  ret void
>> +}
>> +define void @f10() {
>> +  ret void
>> +}
>> +define void @f11() {
>> +  ret void
>> +}
>> +define void @f12() {
>> +  ret void
>> +}
>> +
>> +declare i32 @__gxx_personality_v0(...)
>> +
>> +define void @test0() {
>> +; CHECK-LABEL: Call edges in function: test0
>> +; CHECK-NEXT: -> f
>> +; CHECK-NOT: ->
>> +
>> +entry:
>> +  call void @f()
>> +  call void @f()
>> +  call void @f()
>> +  call void @f()
>> +  ret void
>> +}
>> +
>> +define void ()* @test1(void ()** %x) {
>> +; CHECK-LABEL: Call edges in function: test1
>> +; CHECK-NEXT: -> f12
>> +; CHECK-NEXT: -> f11
>> +; CHECK-NEXT: -> f10
>> +; CHECK-NEXT: -> f7
>> +; CHECK-NEXT: -> f9
>> +; CHECK-NEXT: -> f8
>> +; CHECK-NEXT: -> f6
>> +; CHECK-NEXT: -> f5
>> +; CHECK-NEXT: -> f4
>> +; CHECK-NEXT: -> f3
>> +; CHECK-NEXT: -> f2
>> +; CHECK-NEXT: -> f1
>> +; CHECK-NOT: ->
>> +
>> +entry:
>> +  br label %next
>> +
>> +dead:
>> +  br label %next
>> +
>> +next:
>> +  phi void ()* [ @f1, %entry ], [ @f2, %dead ]
>> +  select i1 true, void ()* @f3, void ()* @f4
>> +  store void ()* @f5, void ()** %x
>> +  call void @f6()
>> +  call void (void ()*, void ()*)* bitcast (void ()* @f7 to void (void ()*, void ()*)*)(void ()* @f8, void ()* @f9)
>> +  invoke void @f10() to label %exit unwind label %unwind
>> +
>> +exit:
>> +  ret void ()* @f11
>> +
>> +unwind:
>> +  %res = landingpad { i8*, i32 } personality i32 (...)* @__gxx_personality_v0
>> +          cleanup
>> +  resume { i8*, i32 } { i8* bitcast (void ()* @f12 to i8*), i32 42 }
>> +}
>> +
>> +@g = global void ()* @f1
>> +@g1 = global [4 x void ()*] [void ()* @f2, void ()* @f3, void ()* @f4, void ()* @f5]
>> +@g2 = global {i8, void ()*, i8} {i8 1, void ()* @f6, i8 2}
>> +@h = constant void ()* @f7
>> +
>> +define void @test2() {
>> +; CHECK-LABEL: Call edges in function: test2
>> +; CHECK-NEXT: -> f7
>> +; CHECK-NEXT: -> f6
>> +; CHECK-NEXT: -> f5
>> +; CHECK-NEXT: -> f4
>> +; CHECK-NEXT: -> f3
>> +; CHECK-NEXT: -> f2
>> +; CHECK-NEXT: -> f1
>> +; CHECK-NOT: ->
>> +
>> +  load i8** bitcast (void ()** @g to i8**)
>> +  load i8** bitcast (void ()** getelementptr ([4 x void ()*]* @g1, i32 0, i32 2) to i8**)
>> +  load i8** bitcast (void ()** getelementptr ({i8, void ()*, i8}* @g2, i32 0, i32 1) to i8**)
>> +  load i8** bitcast (void ()** @h to i8**)
>> +  ret void
>> +}
>>
>> Modified: llvm/trunk/tools/opt/NewPMDriver.cpp
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/tools/opt/NewPMDriver.cpp?rev=200903&r1=200902&r2=200903&view=diff
>>
>> ==============================================================================
>> --- llvm/trunk/tools/opt/NewPMDriver.cpp (original)
>> +++ llvm/trunk/tools/opt/NewPMDriver.cpp Wed Feb  5 22:37:03 2014
>> @@ -16,6 +16,7 @@
>>  #include "NewPMDriver.h"
>>  #include "Passes.h"
>>  #include "llvm/ADT/StringRef.h"
>> +#include "llvm/Analysis/LazyCallGraph.h"
>>  #include "llvm/Bitcode/BitcodeWriterPass.h"
>>  #include "llvm/IR/IRPrintingPasses.h"
>>  #include "llvm/IR/LLVMContext.h"
>> @@ -35,6 +36,10 @@ bool llvm::runPassPipeline(StringRef Arg
>>    FunctionAnalysisManager FAM;
>>    ModuleAnalysisManager MAM;
>>
>> +  // FIXME: Lift this registration of analysis passes into a .def file adjacent
>> +  // to the one used to associate names with passes.
>> +  MAM.registerPass(LazyCallGraphAnalysis());
>> +
>>    // Cross register the analysis managers through their proxies.
>>    MAM.registerPass(FunctionAnalysisManagerModuleProxy(FAM));
>>    FAM.registerPass(ModuleAnalysisManagerFunctionProxy(MAM));
>>
>> Modified: llvm/trunk/tools/opt/Passes.cpp
>> URL:
>> http://llvm.org/viewvc/llvm-project/llvm/trunk/tools/opt/Passes.cpp?rev=200903&r1=200902&r2=200903&view=diff
>>
>> ==============================================================================
>> --- llvm/trunk/tools/opt/Passes.cpp (original)
>> +++ llvm/trunk/tools/opt/Passes.cpp Wed Feb  5 22:37:03 2014
>> @@ -15,6 +15,7 @@
>>
>>  //===----------------------------------------------------------------------===//
>>
>>  #include "Passes.h"
>> +#include "llvm/Analysis/LazyCallGraph.h"
>>  #include "llvm/IR/IRPrintingPasses.h"
>>  #include "llvm/IR/PassManager.h"
>>  #include "llvm/IR/Verifier.h"
>> @@ -43,6 +44,7 @@ struct NoOpFunctionPass {
>>  static bool isModulePassName(StringRef Name) {
>>    if (Name == "no-op-module") return true;
>>    if (Name == "print") return true;
>> +  if (Name == "print-cg") return true;
>>
>>    return false;
>>  }
>> @@ -63,6 +65,10 @@ static bool parseModulePassName(ModulePa
>>      MPM.addPass(PrintModulePass(dbgs()));
>>      return true;
>>    }
>> +  if (Name == "print-cg") {
>> +    MPM.addPass(LazyCallGraphPrinterPass(dbgs()));
>> +    return true;
>> +  }
>>    return false;
>>  }
>>
>>
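>> With the "print-cg" registration above in place, the analysis can be
>> exercised through opt's new pass manager pipeline, mirroring the RUN line of
>> the new test ("input.ll" is just a placeholder file name):
>>
>>   opt -disable-output -passes=print-cg input.ll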
>>
>> _______________________________________________
>> llvm-commits mailing list
>> llvm-commits at cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>>
>
>
> _______________________________________________
> llvm-commits mailing list
> llvm-commits at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>
>

