[llvm] fe755af - Revert "Remove PlaceSafepoints pass"

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Thu Oct 13 07:17:37 PDT 2022


Author: Philip Reames
Date: 2022-10-13T07:17:25-07:00
New Revision: fe755af3a9a56a23494ed231d1b007a9a7b91174

URL: https://github.com/llvm/llvm-project/commit/fe755af3a9a56a23494ed231d1b007a9a7b91174
DIFF: https://github.com/llvm/llvm-project/commit/fe755af3a9a56a23494ed231d1b007a9a7b91174.diff

LOG: Revert "Remove PlaceSafepoints pass"

This reverts commit cb66e123c6bc82a793300b6fb3ecbed79c58f557.  It was reported via https://reviews.llvm.org/rGcb66e123c6bc82a793300b6fb3ecbed79c58f557#1132969 that the Microsoft.NET compiler is still using this pass.

Added: 
    llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
    llvm/test/Transforms/PlaceSafepoints/basic.ll
    llvm/test/Transforms/PlaceSafepoints/call-in-loop.ll
    llvm/test/Transforms/PlaceSafepoints/finite-loops.ll
    llvm/test/Transforms/PlaceSafepoints/libcall.ll
    llvm/test/Transforms/PlaceSafepoints/memset.ll
    llvm/test/Transforms/PlaceSafepoints/no-statepoints.ll
    llvm/test/Transforms/PlaceSafepoints/split-backedge.ll
    llvm/test/Transforms/PlaceSafepoints/statepoint-coreclr.ll
    llvm/test/Transforms/PlaceSafepoints/statepoint-frameescape.ll

Modified: 
    llvm/docs/GarbageCollection.rst
    llvm/docs/Statepoints.rst
    llvm/include/llvm/InitializePasses.h
    llvm/include/llvm/Transforms/Scalar.h
    llvm/lib/Transforms/Scalar/CMakeLists.txt
    llvm/lib/Transforms/Scalar/Scalar.cpp
    llvm/utils/gn/secondary/llvm/lib/Transforms/Scalar/BUILD.gn

Removed: 
    


################################################################################
diff --git a/llvm/docs/GarbageCollection.rst b/llvm/docs/GarbageCollection.rst
index d32909c302e7c..06f934b92f144 100644
--- a/llvm/docs/GarbageCollection.rst
+++ b/llvm/docs/GarbageCollection.rst
@@ -505,7 +505,7 @@ The Statepoint Example GC
 
 This GC provides an example of how one might use the infrastructure provided
 by ``gc.statepoint``. This example GC is compatible with the
-:ref:`RewriteStatepointsForGC` utility passes
+:ref:`PlaceSafepoints` and :ref:`RewriteStatepointsForGC` utility passes
 which simplify ``gc.statepoint`` sequence insertion. If you need to build a
 custom GC strategy around the ``gc.statepoints`` mechanisms, it is recommended
 that you use this one as a starting point.

diff --git a/llvm/docs/Statepoints.rst b/llvm/docs/Statepoints.rst
index 120c8f91564f5..25f0a093c458c 100644
--- a/llvm/docs/Statepoints.rst
+++ b/llvm/docs/Statepoints.rst
@@ -669,6 +669,72 @@ pointer and offset pairs. For example:
     i64 %length)
 
 
+.. _PlaceSafepoints:
+
+PlaceSafepoints
+^^^^^^^^^^^^^^^^
+
+The pass PlaceSafepoints inserts safepoint polls sufficient to ensure running
+code checks for a safepoint request in a timely manner. This pass is expected
+to be run before RewriteStatepointsForGC and thus does not produce full
+relocation sequences.
+
+As an example, given input IR of the following:
+
+.. code-block:: llvm
+
+  define void @test() gc "statepoint-example" {
+    call void @foo()
+    ret void
+  }
+
+  declare void @do_safepoint()
+  define void @gc.safepoint_poll() {
+    call void @do_safepoint()
+    ret void
+  }
+
+
+This pass would produce the following IR:
+
+.. code-block:: llvm
+
+  define void @test() gc "statepoint-example" {
+    call void @do_safepoint()
+    call void @foo()
+    ret void
+  }
+
+In this case, we've added an (unconditional) entry safepoint poll.  Note that
+despite appearances, the entry poll is not necessarily redundant.  We'd have to
+know that ``foo`` and ``test`` were not mutually recursive for the poll to be
+redundant.  In practice, you'd probably want your poll definition to contain
+a conditional branch of some form.
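+
+As a sketch only (the flag ``@poll_flag`` and its update protocol are
+hypothetical, not something this pass defines), a conditional poll might look
+like:
+
+.. code-block:: llvm
+
+  ; hypothetical flag, set by the runtime when a safepoint is requested
+  @poll_flag = external global i8
+
+  declare void @do_safepoint()
+  define void @gc.safepoint_poll() {
+  entry:
+    %value = load volatile i8, i8* @poll_flag
+    %request = icmp ne i8 %value, 0
+    br i1 %request, label %slow_path, label %done
+
+  slow_path:
+    call void @do_safepoint()
+    br label %done
+
+  done:
+    ret void
+  }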
+
+At the moment, PlaceSafepoints can insert safepoint polls at method entry and
+loop backedge locations.  Extending this to work with return polls would be
+straightforward if desired.
+
+PlaceSafepoints includes a number of optimizations to avoid placing safepoint
+polls at particular sites unless needed to ensure timely execution of a poll
+under normal conditions.  PlaceSafepoints does not attempt to ensure timely
+execution of a poll under worst case conditions such as heavy system paging.
+
+The implementation of a safepoint poll action is specified by looking up a
+function named ``gc.safepoint_poll`` in the containing Module.  The body
+of this function is inserted at each desired poll site.  While calls or invokes
+inside this method are transformed to ``gc.statepoint`` calls, recursive poll
+insertion is not performed.
+
+This pass is useful for any language frontend which only has to support
+garbage collection semantics at safepoints.  If you need other abstract
+frame information at safepoints (e.g. for deoptimization or introspection),
+you can insert safepoint polls in the frontend.  In the latter case,
+please ask on llvm-dev for suggestions.  There's been a good amount of work
+done on making such a scheme work well in practice which is not yet documented
+here.
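+
+As a point of reference, the tests accompanying this pass exercise it through
+``opt`` using the legacy pass manager:
+
+.. code-block:: sh
+
+  opt < input.ll -S -place-safepoints -enable-new-pm=0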
+
+
 Supported Architectures
 =======================
 

diff --git a/llvm/include/llvm/InitializePasses.h b/llvm/include/llvm/InitializePasses.h
index e7a5d0a218d17..3dd8195b83195 100644
--- a/llvm/include/llvm/InitializePasses.h
+++ b/llvm/include/llvm/InitializePasses.h
@@ -332,6 +332,8 @@ void initializePatchableFunctionPass(PassRegistry&);
 void initializePeepholeOptimizerPass(PassRegistry&);
 void initializePhiValuesWrapperPassPass(PassRegistry&);
 void initializePhysicalRegisterUsageInfoPass(PassRegistry&);
+void initializePlaceBackedgeSafepointsImplPass(PassRegistry&);
+void initializePlaceSafepointsPass(PassRegistry&);
 void initializePostDomOnlyPrinterWrapperPassPass(PassRegistry &);
 void initializePostDomOnlyViewerWrapperPassPass(PassRegistry &);
 void initializePostDomPrinterWrapperPassPass(PassRegistry &);

diff --git a/llvm/include/llvm/Transforms/Scalar.h b/llvm/include/llvm/Transforms/Scalar.h
index e2175569995a7..5f852963c687d 100644
--- a/llvm/include/llvm/Transforms/Scalar.h
+++ b/llvm/include/llvm/Transforms/Scalar.h
@@ -462,6 +462,16 @@ FunctionPass *createSpeculativeExecutionIfHasBranchDivergencePass();
 //
 FunctionPass *createStraightLineStrengthReducePass();
 
+//===----------------------------------------------------------------------===//
+//
+// PlaceSafepoints - Rewrite any IR calls to gc.statepoints and insert any
+// safepoint polls (method entry, backedge) that might be required.  This pass
+// does not generate explicit relocation sequences - that's handled by
+// RewriteStatepointsForGC which can be run at an arbitrary point in the pass
+// order following this pass.
+//
+FunctionPass *createPlaceSafepointsPass();
+
 //===----------------------------------------------------------------------===//
 //
 // RewriteStatepointsForGC - Rewrite any gc.statepoints which do not yet have

diff --git a/llvm/lib/Transforms/Scalar/CMakeLists.txt b/llvm/lib/Transforms/Scalar/CMakeLists.txt
index 8c1ad8f300481..eb008c15903a7 100644
--- a/llvm/lib/Transforms/Scalar/CMakeLists.txt
+++ b/llvm/lib/Transforms/Scalar/CMakeLists.txt
@@ -59,6 +59,7 @@ add_llvm_component_library(LLVMScalarOpts
   NaryReassociate.cpp
   NewGVN.cpp
   PartiallyInlineLibCalls.cpp
+  PlaceSafepoints.cpp
   Reassociate.cpp
   Reg2Mem.cpp
   RewriteStatepointsForGC.cpp

diff --git a/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp b/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
new file mode 100644
index 0000000000000..e1cc3fc71c3e4
--- /dev/null
+++ b/llvm/lib/Transforms/Scalar/PlaceSafepoints.cpp
@@ -0,0 +1,691 @@
+//===- PlaceSafepoints.cpp - Place GC Safepoints --------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// Place garbage collection safepoints at appropriate locations in the IR. This
+// does not make relocation semantics or variable liveness explicit.  That's
+// done by RewriteStatepointsForGC.
+//
+// Terminology:
+// - A call is said to be "parseable" if there is a stack map generated for the
+// return PC of the call.  A runtime can determine where values listed in the
+// deopt arguments and (after RewriteStatepointsForGC) gc arguments are located
+// on the stack when the code is suspended inside such a call.  Every parse
+// point is represented by a call wrapped in a gc.statepoint intrinsic.
+// - A "poll" is an explicit check in the generated code to determine if the
+// runtime needs the generated code to cooperate by calling a helper routine
+// and thus suspending its execution at a known state. The call to the helper
+// routine will be parseable.  The (gc & runtime specific) logic of a poll is
+// assumed to be provided in a function of the name "gc.safepoint_poll".
+//
+// We aim to insert polls such that running code can quickly be brought to a
+// well defined state for inspection by the collector.  In the current
+// implementation, this is done via the insertion of poll sites at method entry
+// and the backedge of most loops.  We try to avoid inserting more polls than
+// are necessary to ensure a finite period between poll sites.  This is not
+// because the poll itself is expensive in the generated code; it's not.  Polls
+// do tend to impact the optimizer itself in negative ways; we'd like to avoid
+// perturbing the optimization of the method as much as we can.
+//
+// We also need to make most call sites parseable.  The callee might execute a
+// poll (or otherwise be inspected by the GC).  If so, the entire stack
+// (including the suspended frame of the current method) must be parseable.
+//
+// This pass will insert:
+// - Call parse points ("call safepoints") for any call which may need to
+// reach a safepoint during the execution of the callee function.
+// - Backedge safepoint polls and entry safepoint polls to ensure that
+// executing code reaches a safepoint poll in a finite amount of time.
+//
+// We do not currently support return statepoints, but adding them would not
+// be hard.  They are not required for correctness - entry safepoints are an
+// alternative - but some GCs may prefer them.  Patches welcome.
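+//
+// The expected pipeline (see docs/Statepoints.rst) runs this pass first to
+// choose poll and parse point locations, then runs RewriteStatepointsForGC
+// at some later point to make relocation sequences explicit.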
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/InitializePasses.h"
+#include "llvm/Pass.h"
+
+#include "llvm/ADT/SetVector.h"
+#include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/CFG.h"
+#include "llvm/Analysis/LoopInfo.h"
+#include "llvm/Analysis/ScalarEvolution.h"
+#include "llvm/Analysis/TargetLibraryInfo.h"
+#include "llvm/IR/Dominators.h"
+#include "llvm/IR/IntrinsicInst.h"
+#include "llvm/IR/LegacyPassManager.h"
+#include "llvm/IR/Statepoint.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/Debug.h"
+#include "llvm/Transforms/Scalar.h"
+#include "llvm/Transforms/Utils/BasicBlockUtils.h"
+#include "llvm/Transforms/Utils/Cloning.h"
+#include "llvm/Transforms/Utils/Local.h"
+
+#define DEBUG_TYPE "safepoint-placement"
+
+STATISTIC(NumEntrySafepoints, "Number of entry safepoints inserted");
+STATISTIC(NumBackedgeSafepoints, "Number of backedge safepoints inserted");
+
+STATISTIC(CallInLoop,
+          "Number of loops without safepoints due to calls in loop");
+STATISTIC(FiniteExecution,
+          "Number of loops without safepoints due to finite execution");
+
+using namespace llvm;
+
+// Ignore opportunities to avoid placing safepoints on backedges, useful for
+// validation
+static cl::opt<bool> AllBackedges("spp-all-backedges", cl::Hidden,
+                                  cl::init(false));
+
+/// How narrow does the trip count of a loop have to be for the loop to be
+/// considered "counted"?  Counted loops do not get safepoints at backedges.
+static cl::opt<int> CountedLoopTripWidth("spp-counted-loop-trip-width",
+                                         cl::Hidden, cl::init(32));
+
+// If true, split the backedge of a loop when placing the safepoint, otherwise
+// split the latch block itself.  Both are worth supporting for
+// experimentation, but in practice, it looks like splitting the backedge
+// optimizes better.
+static cl::opt<bool> SplitBackedge("spp-split-backedge", cl::Hidden,
+                                   cl::init(false));
+
+namespace {
+
+/// An analysis pass whose purpose is to identify each of the backedges in
+/// the function which require a safepoint poll to be inserted.
+struct PlaceBackedgeSafepointsImpl : public FunctionPass {
+  static char ID;
+
+  /// The output of the pass - gives a list of each backedge (described by
+  /// pointing at the branch) which need a poll inserted.
+  std::vector<Instruction *> PollLocations;
+
+  /// True unless we're running spp-no-call, in which case we need to disable
+  /// the call-dependent placement opts.
+  bool CallSafepointsEnabled;
+
+  ScalarEvolution *SE = nullptr;
+  DominatorTree *DT = nullptr;
+  LoopInfo *LI = nullptr;
+  TargetLibraryInfo *TLI = nullptr;
+
+  PlaceBackedgeSafepointsImpl(bool CallSafepoints = false)
+      : FunctionPass(ID), CallSafepointsEnabled(CallSafepoints) {
+    initializePlaceBackedgeSafepointsImplPass(*PassRegistry::getPassRegistry());
+  }
+
+  bool runOnLoop(Loop *);
+  void runOnLoopAndSubLoops(Loop *L) {
+    // Visit all the subloops
+    for (Loop *I : *L)
+      runOnLoopAndSubLoops(I);
+    runOnLoop(L);
+  }
+
+  bool runOnFunction(Function &F) override {
+    SE = &getAnalysis<ScalarEvolutionWrapperPass>().getSE();
+    DT = &getAnalysis<DominatorTreeWrapperPass>().getDomTree();
+    LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
+    TLI = &getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F);
+    for (Loop *I : *LI) {
+      runOnLoopAndSubLoops(I);
+    }
+    return false;
+  }
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<DominatorTreeWrapperPass>();
+    AU.addRequired<ScalarEvolutionWrapperPass>();
+    AU.addRequired<LoopInfoWrapperPass>();
+    AU.addRequired<TargetLibraryInfoWrapperPass>();
+    // We no longer modify the IR at all in this pass.  Thus all
+    // analyses are preserved.
+    AU.setPreservesAll();
+  }
+};
+}
+
+static cl::opt<bool> NoEntry("spp-no-entry", cl::Hidden, cl::init(false));
+static cl::opt<bool> NoCall("spp-no-call", cl::Hidden, cl::init(false));
+static cl::opt<bool> NoBackedge("spp-no-backedge", cl::Hidden, cl::init(false));
+
+namespace {
+struct PlaceSafepoints : public FunctionPass {
+  static char ID; // Pass identification, replacement for typeid
+
+  PlaceSafepoints() : FunctionPass(ID) {
+    initializePlaceSafepointsPass(*PassRegistry::getPassRegistry());
+  }
+  bool runOnFunction(Function &F) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    // We modify the graph wholesale (inlining, block insertion, etc).  We
+    // preserve nothing at the moment.  We could potentially preserve dom tree
+    // if that was worth doing
+    AU.addRequired<TargetLibraryInfoWrapperPass>();
+  }
+};
+}
+
+// Insert a safepoint poll immediately before the given instruction.  Does
+// not handle the parsability of state at the runtime call; that's the
+// caller's job.
+static void
+InsertSafepointPoll(Instruction *InsertBefore,
+                    std::vector<CallBase *> &ParsePointsNeeded /*rval*/,
+                    const TargetLibraryInfo &TLI);
+
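+// Returns true if the given call must be wrapped in a gc.statepoint to be
+// parseable.  For example, a call annotated "gc-leaf-function" (as exercised
+// by the no-statepoints.ll test) is known not to poll, so no statepoint is
+// needed for it.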
+static bool needsStatepoint(CallBase *Call, const TargetLibraryInfo &TLI) {
+  if (callsGCLeafFunction(Call, TLI))
+    return false;
+  if (auto *CI = dyn_cast<CallInst>(Call)) {
+    if (CI->isInlineAsm())
+      return false;
+  }
+
+  return !(isa<GCStatepointInst>(Call) || isa<GCRelocateInst>(Call) ||
+           isa<GCResultInst>(Call));
+}
+
+/// Returns true if this loop is known to contain a call safepoint which
+/// must unconditionally execute on any iteration of the loop which returns
+/// to the loop header via an edge from Pred.  Returns a conservatively
+/// correct answer; i.e. false is always valid.
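+/// For example, a loop whose body unconditionally calls a non-leaf function
+/// (as in the call-in-loop.ll test) needs no separate backedge poll; the
+/// callee is assumed to reach a poll itself.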
+static bool containsUnconditionalCallSafepoint(Loop *L, BasicBlock *Header,
+                                               BasicBlock *Pred,
+                                               DominatorTree &DT,
+                                               const TargetLibraryInfo &TLI) {
+  // In general, we're looking for any cut of the graph which ensures
+  // there's a call safepoint along every edge between Header and Pred.
+  // For the moment, we look only for the 'cuts' that consist of a single call
+  // instruction in a block which is dominated by the Header and dominates the
+  // loop latch (Pred) block.  Somewhat surprisingly, walking the entire chain
+  // of such dominating blocks catches substantially more cases than just
+  // checking the Pred and Header blocks themselves.  This may be due to the
+  // density of loop exit conditions caused by range and null checks.
+  // TODO: structure this as an analysis pass, cache the result for subloops,
+  // avoid dom tree recalculations
+  assert(DT.dominates(Header, Pred) && "loop latch not dominated by header?");
+
+  BasicBlock *Current = Pred;
+  while (true) {
+    for (Instruction &I : *Current) {
+      if (auto *Call = dyn_cast<CallBase>(&I))
+        // Note: Technically, needing a safepoint isn't quite the right
+        // condition here.  We should instead be checking if the target method
+        // has an unconditional poll. In practice, this is only a theoretical
+        // concern since we don't have any methods with conditional-only
+        // safepoint polls.
+        if (needsStatepoint(Call, TLI))
+          return true;
+    }
+
+    if (Current == Header)
+      break;
+    Current = DT.getNode(Current)->getIDom()->getBlock();
+  }
+
+  return false;
+}
+
+/// Returns true if this loop is known to terminate in a finite number of
+/// iterations.  Note that this function may return false for a loop which
+/// does actually terminate in a finite constant number of iterations due to
+/// conservatism in the analysis.
+static bool mustBeFiniteCountedLoop(Loop *L, ScalarEvolution *SE,
+                                    BasicBlock *Pred) {
+  // A conservative bound on the loop as a whole.
+  const SCEV *MaxTrips = SE->getConstantMaxBackedgeTakenCount(L);
+  if (!isa<SCEVCouldNotCompute>(MaxTrips) &&
+      SE->getUnsignedRange(MaxTrips).getUnsignedMax().isIntN(
+          CountedLoopTripWidth))
+    return true;
+
+  // If this is a conditional branch to the header with the alternate path
+  // being outside the loop, we can ask questions about the execution frequency
+  // of the exit block.
+  if (L->isLoopExiting(Pred)) {
+    // This returns an exact expression only.  TODO: We really only need an
+    // upper bound here, but SE doesn't expose that.
+    const SCEV *MaxExec = SE->getExitCount(L, Pred);
+    if (!isa<SCEVCouldNotCompute>(MaxExec) &&
+        SE->getUnsignedRange(MaxExec).getUnsignedMax().isIntN(
+            CountedLoopTripWidth))
+      return true;
+  }
+
+  return /* not finite */ false;
+}
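+// For example, with the default -spp-counted-loop-trip-width of 32, a loop
+// counting an i32 up to a constant bound of 16 is treated as counted and
+// gets no backedge poll, while the same loop counting an i64 up to an
+// unknown bound is not (see the finite-loops.ll test).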
+
+static void scanOneBB(Instruction *Start, Instruction *End,
+                      std::vector<CallInst *> &Calls,
+                      DenseSet<BasicBlock *> &Seen,
+                      std::vector<BasicBlock *> &Worklist) {
+  for (BasicBlock::iterator BBI(Start), BBE0 = Start->getParent()->end(),
+                                        BBE1 = BasicBlock::iterator(End);
+       BBI != BBE0 && BBI != BBE1; BBI++) {
+    if (CallInst *CI = dyn_cast<CallInst>(&*BBI))
+      Calls.push_back(CI);
+
+    // FIXME: This code does not handle invokes
+    assert(!isa<InvokeInst>(&*BBI) &&
+           "support for invokes in poll code needed");
+
+    // Only add the successor blocks if we reach the terminator instruction
+    // without encountering end first
+    if (BBI->isTerminator()) {
+      BasicBlock *BB = BBI->getParent();
+      for (BasicBlock *Succ : successors(BB)) {
+        if (Seen.insert(Succ).second) {
+          Worklist.push_back(Succ);
+        }
+      }
+    }
+  }
+}
+
+static void scanInlinedCode(Instruction *Start, Instruction *End,
+                            std::vector<CallInst *> &Calls,
+                            DenseSet<BasicBlock *> &Seen) {
+  Calls.clear();
+  std::vector<BasicBlock *> Worklist;
+  Seen.insert(Start->getParent());
+  scanOneBB(Start, End, Calls, Seen, Worklist);
+  while (!Worklist.empty()) {
+    BasicBlock *BB = Worklist.back();
+    Worklist.pop_back();
+    scanOneBB(&*BB->begin(), End, Calls, Seen, Worklist);
+  }
+}
+
+bool PlaceBackedgeSafepointsImpl::runOnLoop(Loop *L) {
+  // Loop through all loop latches (branches controlling backedges).  We need
+  // to place a safepoint on every backedge (potentially).
+  // Note: In common usage, there will be only one edge due to LoopSimplify
+  // having run sometime earlier in the pipeline, but this code must be correct
+  // w.r.t. loops with multiple backedges.
+  BasicBlock *Header = L->getHeader();
+  SmallVector<BasicBlock*, 16> LoopLatches;
+  L->getLoopLatches(LoopLatches);
+  for (BasicBlock *Pred : LoopLatches) {
+    assert(L->contains(Pred));
+
+    // Make a policy decision about whether this loop needs a safepoint or
+    // not.  Note that this is about unburdening the optimizer in loops, not
+    // avoiding the runtime cost of the actual safepoint.
+    if (!AllBackedges) {
+      if (mustBeFiniteCountedLoop(L, SE, Pred)) {
+        LLVM_DEBUG(dbgs() << "skipping safepoint placement in finite loop\n");
+        FiniteExecution++;
+        continue;
+      }
+      if (CallSafepointsEnabled &&
+          containsUnconditionalCallSafepoint(L, Header, Pred, *DT, *TLI)) {
+        // Note: This is only semantically legal since we won't do any further
+        // IPO or inlining before the actual call insertion.  If we did, we
+        // might later lose this call safepoint.
+        LLVM_DEBUG(
+            dbgs()
+            << "skipping safepoint placement due to unconditional call\n");
+        CallInLoop++;
+        continue;
+      }
+    }
+
+    // TODO: We can create an inner loop which runs a finite number of
+    // iterations with an outer loop which contains a safepoint.  This would
+    // not help runtime performance that much, but it might help our ability to
+    // optimize the inner loop.
+
+    // Safepoint insertion would involve creating a new basic block (as the
+    // target of the current backedge) which does the safepoint (of all live
+    // variables) and branches to the true header
+    Instruction *Term = Pred->getTerminator();
+
+    LLVM_DEBUG(dbgs() << "[LSP] terminator instruction: " << *Term);
+
+    PollLocations.push_back(Term);
+  }
+
+  return false;
+}
+
+/// Returns true if an entry safepoint is not required before this callsite in
+/// the caller function.
+static bool doesNotRequireEntrySafepointBefore(CallBase *Call) {
+  if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(Call)) {
+    switch (II->getIntrinsicID()) {
+    case Intrinsic::experimental_gc_statepoint:
+    case Intrinsic::experimental_patchpoint_void:
+    case Intrinsic::experimental_patchpoint_i64:
+      // These can wrap an actual call which may grow the stack by an
+      // unbounded amount or run forever.
+      return false;
+    default:
+      // Most LLVM intrinsics are things which do not expand to actual calls, or
+      // at least if they do, are leaf functions that cause only finite stack
+      // growth.  In particular, the optimizer likes to form things like memsets
+      // out of stores in the original IR.  Another important example is
+      // llvm.localescape which must occur in the entry block.  Inserting a
+      // safepoint before it is not legal since it could push the localescape
+      // out of the entry block.
+      return true;
+    }
+  }
+  return false;
+}
+
+static Instruction *findLocationForEntrySafepoint(Function &F,
+                                                  DominatorTree &DT) {
+
+  // Conceptually, this poll needs to be on method entry, but in
+  // practice, we place it as late in the entry block as possible.  We
+  // can place it as late as we want as long as it dominates all calls
+  // that can grow the stack.  This, combined with backedge polls,
+  // gives us all the progress guarantees we need.
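+  //
+  // For example, the entry poll deliberately lands after an
+  // llvm.localescape call (see the statepoint-frameescape.ll test), since
+  // placing it earlier could push the localescape out of the entry block.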
+
+  // HasNextInstruction and NextInstruction are used to iterate
+  // through a "straight line" execution sequence.
+
+  auto HasNextInstruction = [](Instruction *I) {
+    if (!I->isTerminator())
+      return true;
+
+    BasicBlock *nextBB = I->getParent()->getUniqueSuccessor();
+    return nextBB && (nextBB->getUniquePredecessor() != nullptr);
+  };
+
+  auto NextInstruction = [&](Instruction *I) {
+    assert(HasNextInstruction(I) &&
+           "first check if there is a next instruction!");
+
+    if (I->isTerminator())
+      return &I->getParent()->getUniqueSuccessor()->front();
+    return &*++I->getIterator();
+  };
+
+  Instruction *Cursor = nullptr;
+  for (Cursor = &F.getEntryBlock().front(); HasNextInstruction(Cursor);
+       Cursor = NextInstruction(Cursor)) {
+
+    // We need to ensure a safepoint poll occurs before any 'real' call.  The
+    // easiest way to ensure finite execution between safepoints in the face of
+    // recursive and mutually recursive functions is to enforce that each take
+    // a safepoint.  Additionally, we need to ensure a poll before any call
+    // which can grow the stack by an unbounded amount.  This isn't required
+    // for GC semantics per se, but is a common requirement for languages
+    // which detect stack overflow via guard pages and then throw exceptions.
+    if (auto *Call = dyn_cast<CallBase>(Cursor)) {
+      if (doesNotRequireEntrySafepointBefore(Call))
+        continue;
+      break;
+    }
+  }
+
+  assert((HasNextInstruction(Cursor) || Cursor->isTerminator()) &&
+         "either we stopped because of a call, or because of terminator");
+
+  return Cursor;
+}
+
+const char GCSafepointPollName[] = "gc.safepoint_poll";
+
+static bool isGCSafepointPoll(Function &F) {
+  return F.getName().equals(GCSafepointPollName);
+}
+
+/// Returns true if this function should be rewritten to include safepoint
+/// polls and parseable call sites.  The main point of this function is to be
+/// an extension point for custom logic.
+static bool shouldRewriteFunction(Function &F) {
+  // TODO: This should check the GCStrategy
+  if (F.hasGC()) {
+    const auto &FunctionGCName = F.getGC();
+    const StringRef StatepointExampleName("statepoint-example");
+    const StringRef CoreCLRName("coreclr");
+    return (StatepointExampleName == FunctionGCName) ||
+           (CoreCLRName == FunctionGCName);
+  } else
+    return false;
+}
+
+// TODO: These should become properties of the GCStrategy, possibly with
+// command line overrides.
+static bool enableEntrySafepoints(Function &F) { return !NoEntry; }
+static bool enableBackedgeSafepoints(Function &F) { return !NoBackedge; }
+static bool enableCallSafepoints(Function &F) { return !NoCall; }
+
+bool PlaceSafepoints::runOnFunction(Function &F) {
+  if (F.isDeclaration() || F.empty()) {
+    // This is a declaration, nothing to do.  Must exit early to avoid crash in
+    // dom tree calculation
+    return false;
+  }
+
+  if (isGCSafepointPoll(F)) {
+    // Given this function is inlined as part of safepoint poll insertion,
+    // rewriting it doesn't make any sense.  Note that we do make any
+    // contained calls parseable after we inline a poll.
+    return false;
+  }
+
+  if (!shouldRewriteFunction(F))
+    return false;
+
+  const TargetLibraryInfo &TLI =
+      getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F);
+
+  bool Modified = false;
+
+  // In various bits below, we rely on the fact that uses are reachable from
+  // defs.  When there are basic blocks unreachable from the entry, dominance
+  // and reachability queries return nonsensical results.  Thus, we preprocess
+  // the function to ensure these properties hold.
+  Modified |= removeUnreachableBlocks(F);
+
+  // STEP 1 - Insert the safepoint polling locations.  We do not need to
+  // actually insert parse points yet.  That will be done for all polls and
+  // calls in a single pass.
+
+  DominatorTree DT;
+  DT.recalculate(F);
+
+  SmallVector<Instruction *, 16> PollsNeeded;
+  std::vector<CallBase *> ParsePointNeeded;
+
+  if (enableBackedgeSafepoints(F)) {
+    // Construct a pass manager to run the LoopPass backedge logic.  We
+    // need the pass manager to handle scheduling all the loop passes
+    // appropriately.  Doing this by hand is painful and just not worth messing
+    // with for the moment.
+    legacy::FunctionPassManager FPM(F.getParent());
+    bool CanAssumeCallSafepoints = enableCallSafepoints(F);
+    auto *PBS = new PlaceBackedgeSafepointsImpl(CanAssumeCallSafepoints);
+    FPM.add(PBS);
+    FPM.run(F);
+
+    // We preserve dominance information when inserting the poll, otherwise
+    // we'd have to recalculate this on every insert
+    DT.recalculate(F);
+
+    auto &PollLocations = PBS->PollLocations;
+
+    auto OrderByBBName = [](Instruction *a, Instruction *b) {
+      return a->getParent()->getName() < b->getParent()->getName();
+    };
+    // We need the order of list to be stable so that naming ends up stable
+    // when we split edges.  This makes test cases much easier to write.
+    llvm::sort(PollLocations, OrderByBBName);
+
+    // We can sometimes end up with duplicate poll locations.  This happens if
+    // a single loop is visited more than once.  The fact this happens seems
+    // wrong, but it does happen for the split-backedge.ll test case.
+    PollLocations.erase(std::unique(PollLocations.begin(),
+                                    PollLocations.end()),
+                        PollLocations.end());
+
+    // Insert a poll at each point the analysis pass identified
+    // The poll location must be the terminator of a loop latch block.
+    for (Instruction *Term : PollLocations) {
+      // We are inserting a poll, the function is modified
+      Modified = true;
+
+      if (SplitBackedge) {
+        // Split the backedge of the loop and insert the poll within that new
+        // basic block.  This creates a loop with two latches per original
+        // latch (which is non-ideal), but this appears to be easier to
+        // optimize in practice than inserting the poll immediately before the
+        // latch test.
+
+        // Since this is a latch, at least one of the successors must dominate
+        // it. It's possible that we have a) duplicate edges to the same header
+        // and b) edges to distinct loop headers.  We need to insert polls on
+        // each.
+        SetVector<BasicBlock *> Headers;
+        for (unsigned i = 0; i < Term->getNumSuccessors(); i++) {
+          BasicBlock *Succ = Term->getSuccessor(i);
+          if (DT.dominates(Succ, Term->getParent())) {
+            Headers.insert(Succ);
+          }
+        }
+        assert(!Headers.empty() && "poll location is not a loop latch?");
+
+        // The split loop structure here is so that we only need to recalculate
+        // the dominator tree once.  Alternatively, we could just keep it up to
+        // date and use a more natural merged loop.
+        SetVector<BasicBlock *> SplitBackedges;
+        for (BasicBlock *Header : Headers) {
+          BasicBlock *NewBB = SplitEdge(Term->getParent(), Header, &DT);
+          PollsNeeded.push_back(NewBB->getTerminator());
+          NumBackedgeSafepoints++;
+        }
+      } else {
+        // Split the latch block itself, right before the terminator.
+        PollsNeeded.push_back(Term);
+        NumBackedgeSafepoints++;
+      }
+    }
+  }
+
+  if (enableEntrySafepoints(F)) {
+    if (Instruction *Location = findLocationForEntrySafepoint(F, DT)) {
+      PollsNeeded.push_back(Location);
+      Modified = true;
+      NumEntrySafepoints++;
+    }
+    // TODO: else we should assert that there was, in fact, a policy choice to
+    // not insert an entry safepoint poll.
+  }
+
+  // Now that we've identified all the needed safepoint poll locations, insert
+  // safepoint polls themselves.
+  for (Instruction *PollLocation : PollsNeeded) {
+    std::vector<CallBase *> RuntimeCalls;
+    InsertSafepointPoll(PollLocation, RuntimeCalls, TLI);
+    llvm::append_range(ParsePointNeeded, RuntimeCalls);
+  }
+
+  return Modified;
+}
+
+char PlaceBackedgeSafepointsImpl::ID = 0;
+char PlaceSafepoints::ID = 0;
+
+FunctionPass *llvm::createPlaceSafepointsPass() {
+  return new PlaceSafepoints();
+}
+
+INITIALIZE_PASS_BEGIN(PlaceBackedgeSafepointsImpl,
+                      "place-backedge-safepoints-impl",
+                      "Place Backedge Safepoints", false, false)
+INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
+INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
+INITIALIZE_PASS_DEPENDENCY(LoopInfoWrapperPass)
+INITIALIZE_PASS_END(PlaceBackedgeSafepointsImpl,
+                    "place-backedge-safepoints-impl",
+                    "Place Backedge Safepoints", false, false)
+
+INITIALIZE_PASS_BEGIN(PlaceSafepoints, "place-safepoints", "Place Safepoints",
+                      false, false)
+INITIALIZE_PASS_END(PlaceSafepoints, "place-safepoints", "Place Safepoints",
+                    false, false)
+
+static void
+InsertSafepointPoll(Instruction *InsertBefore,
+                    std::vector<CallBase *> &ParsePointsNeeded /*rval*/,
+                    const TargetLibraryInfo &TLI) {
+  BasicBlock *OrigBB = InsertBefore->getParent();
+  Module *M = InsertBefore->getModule();
+  assert(M && "must be part of a module");
+
+  // Inline the safepoint poll implementation - this will get all the branch,
+  // control flow, etc.  Most importantly, it will introduce the actual slow
+  // path call - where we need to insert a safepoint (parsepoint).
+
+  auto *F = M->getFunction(GCSafepointPollName);
+  assert(F && "gc.safepoint_poll function is missing");
+  assert(F->getValueType() ==
+         FunctionType::get(Type::getVoidTy(M->getContext()), false) &&
+         "gc.safepoint_poll declared with wrong type");
+  assert(!F->empty() && "gc.safepoint_poll must be a non-empty function");
+  CallInst *PollCall = CallInst::Create(F, "", InsertBefore);
+
+  // Record some information about the call site we're replacing
+  BasicBlock::iterator Before(PollCall), After(PollCall);
+  bool IsBegin = false;
+  if (Before == OrigBB->begin())
+    IsBegin = true;
+  else
+    Before--;
+
+  After++;
+  assert(After != OrigBB->end() && "must have successor");
+
+  // Do the actual inlining
+  InlineFunctionInfo IFI;
+  bool InlineStatus = InlineFunction(*PollCall, IFI).isSuccess();
+  assert(InlineStatus && "inline must succeed");
+  (void)InlineStatus; // suppress warning in release-asserts
+
+  // Check post-conditions
+  assert(IFI.StaticAllocas.empty() && "can't have allocs");
+
+  std::vector<CallInst *> Calls; // new calls
+  DenseSet<BasicBlock *> BBs;    // new BBs + insertee
+
+  // Include only the newly inserted instructions.  Note: Begin may not be
+  // valid if we inserted to the beginning of the basic block.
+  BasicBlock::iterator Start = IsBegin ? OrigBB->begin() : std::next(Before);
+
+  // If your poll function includes an unreachable at the end, that's not
+  // valid.  Bugpoint likes to create this, so check for it.
+  assert(isPotentiallyReachable(&*Start, &*After) &&
+         "malformed poll function");
+
+  scanInlinedCode(&*Start, &*After, Calls, BBs);
+  assert(!Calls.empty() && "slow path not found for safepoint poll");
+
+  // Record the fact we need a parseable state at the runtime call contained
+  // in the poll function.  This is required so that the runtime knows how to
+  // parse the last frame when we actually take the safepoint (i.e. execute
+  // the slow path).
+  assert(ParsePointsNeeded.empty());
+  for (auto *CI : Calls) {
+    // No safepoint needed or wanted
+    if (!needsStatepoint(CI, TLI))
+      continue;
+
+    // These are likely runtime calls.  Should we assert that via calling
+    // convention or something?
+    ParsePointsNeeded.push_back(CI);
+  }
+  assert(ParsePointsNeeded.size() <= Calls.size());
+}

diff --git a/llvm/lib/Transforms/Scalar/Scalar.cpp b/llvm/lib/Transforms/Scalar/Scalar.cpp
index c68ca13dfe7cc..5ab9e25577d8e 100644
--- a/llvm/lib/Transforms/Scalar/Scalar.cpp
+++ b/llvm/lib/Transforms/Scalar/Scalar.cpp
@@ -104,6 +104,8 @@ void llvm::initializeScalarOpts(PassRegistry &Registry) {
   initializeSeparateConstOffsetFromGEPLegacyPassPass(Registry);
   initializeSpeculativeExecutionLegacyPassPass(Registry);
   initializeStraightLineStrengthReduceLegacyPassPass(Registry);
+  initializePlaceBackedgeSafepointsImplPass(Registry);
+  initializePlaceSafepointsPass(Registry);
   initializeFloat2IntLegacyPassPass(Registry);
   initializeLoopDistributeLegacyPass(Registry);
   initializeLoopLoadEliminationPass(Registry);

diff --git a/llvm/test/Transforms/PlaceSafepoints/basic.ll b/llvm/test/Transforms/PlaceSafepoints/basic.ll
new file mode 100644
index 0000000000000..aed54aa006ac4
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/basic.ll
@@ -0,0 +1,77 @@
+; RUN: opt < %s -S -place-safepoints -enable-new-pm=0 | FileCheck %s
+
+
+; Do we insert a simple entry safepoint?
+define void @test_entry() gc "statepoint-example" {
+; CHECK-LABEL: @test_entry
+entry:
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+  ret void
+}
+
+; On a non-gc function, we should NOT get an entry safepoint
+define void @test_negative() {
+; CHECK-LABEL: @test_negative
+entry:
+; CHECK-NOT: do_safepoint
+  ret void
+}
+
+; Do we insert a backedge safepoint in a statically
+; infinite loop?
+define void @test_backedge() gc "statepoint-example" {
+; CHECK-LABEL: test_backedge
+entry:
+; CHECK-LABEL: entry
+; This statepoint is technically not required, but we don't exploit that yet.
+; CHECK: call void @do_safepoint
+  br label %other
+
+; CHECK-LABEL: other
+; CHECK: call void @do_safepoint
+other:
+  br label %other
+}
+
+; Check that we remove an unreachable block rather than trying
+; to insert a backedge safepoint
+define void @test_unreachable() gc "statepoint-example" {
+; CHECK-LABEL: test_unreachable
+entry:
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+  ret void
+
+; CHECK-NOT: other
+; CHECK-NOT: do_safepoint
+other:
+  br label %other
+}
+
+declare void @foo()
+
+declare zeroext i1 @i1_return_i1(i1)
+
+define i1 @test_call_with_result() gc "statepoint-example" {
+; CHECK-LABEL: test_call_with_result
+; This is checking that a statepoint_poll is inserted for a function
+; that takes 1 argument.
+; CHECK: call void @do_safepoint
+entry:
+  %call1 = tail call i1 (i1) @i1_return_i1(i1 false)
+  ret i1 %call1
+}
+
+; This function is inlined when inserting a poll.  To avoid recursive 
+; issues, make sure we don't place safepoints in it.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+; CHECK-LABEL: entry
+; CHECK-NEXT: do_safepoint
+; CHECK-NEXT: ret void 
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/call-in-loop.ll b/llvm/test/Transforms/PlaceSafepoints/call-in-loop.ll
new file mode 100644
index 0000000000000..2a1892828744f
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/call-in-loop.ll
@@ -0,0 +1,30 @@
+; If there's a call in the loop which dominates the backedge, we 
+; don't need a safepoint poll (since the callee must contain a 
+; poll test).
+;; RUN: opt < %s -place-safepoints -S -enable-new-pm=0 | FileCheck %s
+
+declare void @foo()
+
+define void @test1() gc "statepoint-example" {
+; CHECK-LABEL: test1
+
+entry:
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+  br label %loop
+
+loop:
+; CHECK-LABEL: loop
+; CHECK-NOT: call void @do_safepoint
+  call void @foo()
+  br label %loop
+}
+
+; This function is inlined when inserting a poll.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/finite-loops.ll b/llvm/test/Transforms/PlaceSafepoints/finite-loops.ll
new file mode 100644
index 0000000000000..b226cc780cd43
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/finite-loops.ll
@@ -0,0 +1,143 @@
+; Tests to ensure that we are not placing backedge safepoints in
+; loops which are clearly finite.
+;; RUN: opt < %s -place-safepoints -spp-counted-loop-trip-width=32 -S -enable-new-pm=0 | FileCheck %s
+;; RUN: opt < %s -place-safepoints -spp-counted-loop-trip-width=64 -S -enable-new-pm=0 | FileCheck %s -check-prefix=COUNTED-64
+
+
+; A simple counted loop with trivially known range
+define void @test1(i32) gc "statepoint-example" {
+; CHECK-LABEL: test1
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: loop
+; CHECK-NOT: call void @do_safepoint
+; CHECK-LABEL: exit
+
+entry:
+  br label %loop
+
+loop:
+  %counter = phi i32 [ 0 , %entry ], [ %counter.inc , %loop ]
+  %counter.inc = add i32 %counter, 1
+  %counter.cmp = icmp slt i32 %counter.inc, 16
+  br i1 %counter.cmp, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+; The same counted loop, but with an unknown early exit
+define void @test2(i32) gc "statepoint-example" {
+; CHECK-LABEL: test2
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: loop
+; CHECK-NOT: call void @do_safepoint
+; CHECK-LABEL: exit
+
+entry:
+  br label %loop
+
+loop:
+  %counter = phi i32 [ 0 , %entry ], [ %counter.inc , %continue ]
+  %counter.inc = add i32 %counter, 1
+  %counter.cmp = icmp slt i32 %counter.inc, 16
+  br i1 undef, label %continue, label %exit
+
+continue:
+  br i1 %counter.cmp, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+; The range is an 8 bit value and we can't overflow
+define void @test3(i8 %upper) gc "statepoint-example" {
+; CHECK-LABEL: test3
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: loop
+; CHECK-NOT: call void @do_safepoint
+; CHECK-LABEL: exit
+
+entry:
+  br label %loop
+
+loop:
+  %counter = phi i8 [ 0 , %entry ], [ %counter.inc , %loop ]
+  %counter.inc = add nsw i8 %counter, 1
+  %counter.cmp = icmp slt i8 %counter.inc, %upper
+  br i1 %counter.cmp, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+; The range is a 64 bit value
+define void @test4(i64 %upper) gc "statepoint-example" {
+; CHECK-LABEL: test4
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: loop
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: exit
+
+; COUNTED-64-LABEL: test4
+; COUNTED-64-LABEL: entry
+; COUNTED-64: call void @do_safepoint
+; COUNTED-64-LABEL: loop
+; COUNTED-64-NOT: call void @do_safepoint
+; COUNTED-64-LABEL: exit
+
+entry:
+  br label %loop
+
+loop:
+  %counter = phi i64 [ 0 , %entry ], [ %counter.inc , %loop ]
+  %counter.inc = add i64 %counter, 1
+  %counter.cmp = icmp slt i64 %counter.inc, %upper
+  br i1 %counter.cmp, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+; This loop can run infinitely (for %upper == INT64_MAX) so it needs a
+; safepoint.
+define void @test5(i64 %upper) gc "statepoint-example" {
+; CHECK-LABEL: test5
+; CHECK-LABEL: entry
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: loop
+; CHECK: call void @do_safepoint
+; CHECK-LABEL: exit
+
+; COUNTED-64-LABEL: test5
+; COUNTED-64-LABEL: entry
+; COUNTED-64: call void @do_safepoint
+; COUNTED-64-LABEL: loop
+; COUNTED-64: call void @do_safepoint
+; COUNTED-64-LABEL: exit
+
+entry:
+  br label %loop
+
+loop:
+  %counter = phi i64 [ 0 , %entry ], [ %counter.inc , %loop ]
+  %counter.inc = add i64 %counter, 1
+  %counter.cmp = icmp sle i64 %counter.inc, %upper
+  br i1 %counter.cmp, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+
+; This function is inlined when inserting a poll.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/libcall.ll b/llvm/test/Transforms/PlaceSafepoints/libcall.ll
new file mode 100644
index 0000000000000..58184e5cb4a38
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/libcall.ll
@@ -0,0 +1,37 @@
+; RUN: opt -S -place-safepoints < %s -enable-new-pm=0 | FileCheck %s
+
+; Libcalls will not contain a safepoint poll, so check that we insert
+; a safepoint in a loop containing a libcall.
+declare double @ldexp(double %x, i32 %n) nounwind readnone
+define double @test_libcall(double %x) gc "statepoint-example" {
+; CHECK-LABEL: test_libcall
+
+entry:
+; CHECK: entry
+; CHECK-NEXT: call void @do_safepoint
+; CHECK-NEXT: br label %loop
+  br label %loop
+
+loop:
+; CHECK: loop
+; CHECK-NEXT: %x_loop = phi double [ %x, %entry ], [ %x_exp, %loop ]
+; CHECK-NEXT: %x_exp = call double @ldexp(double %x_loop, i32 5)
+; CHECK-NEXT: %done = fcmp ogt double %x_exp, 1.5
+; CHECK-NEXT: call void @do_safepoint
+  %x_loop = phi double [ %x, %entry ], [ %x_exp, %loop ]
+  %x_exp = call double @ldexp(double %x_loop, i32 5) nounwind readnone
+  %done = fcmp ogt double %x_exp, 1.5
+  br i1 %done, label %end, label %loop
+end:
+  %x_end = phi double [%x_exp, %loop]
+  ret double %x_end
+}
+
+; This function is inlined when inserting a poll.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/memset.ll b/llvm/test/Transforms/PlaceSafepoints/memset.ll
new file mode 100644
index 0000000000000..d5d8ec91ecbc6
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/memset.ll
@@ -0,0 +1,20 @@
+; RUN: opt < %s -S -place-safepoints -enable-new-pm=0 | FileCheck %s
+
+define void @test(i32, i8 addrspace(1)* %ptr) gc "statepoint-example" {
+; CHECK-LABEL: @test
+; CHECK-NEXT: llvm.memset
+; CHECK: do_safepoint
+; CHECK: @foo
+  call void @llvm.memset.p1i8.i64(i8 addrspace(1)* align 8 %ptr, i8 0, i64 24, i1 false)
+  call void @foo()
+  ret void
+}
+
+declare void @foo()
+declare void @llvm.memset.p1i8.i64(i8 addrspace(1)*, i8, i64, i1)
+
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/no-statepoints.ll b/llvm/test/Transforms/PlaceSafepoints/no-statepoints.ll
new file mode 100644
index 0000000000000..a9220ce9ce632
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/no-statepoints.ll
@@ -0,0 +1,23 @@
+; RUN: opt -S -place-safepoints < %s -enable-new-pm=0 | FileCheck %s
+
+declare void @callee()
+
+define void @test() gc "statepoint-example" {
+; CHECK-LABEL: test(
+entry:
+; CHECK: entry:
+; CHECK: call void @do_safepoint()
+  br label %other
+
+other:
+; CHECK: other:
+  call void @callee() "gc-leaf-function"
+; CHECK: call void @do_safepoint()
+  br label %other
+}
+
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/split-backedge.ll b/llvm/test/Transforms/PlaceSafepoints/split-backedge.ll
new file mode 100644
index 0000000000000..feec6e0a6e22f
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/split-backedge.ll
@@ -0,0 +1,46 @@
+;; A very basic test to make sure that splitting the backedge keeps working
+;; RUN: opt < %s -place-safepoints -spp-split-backedge=1 -S -enable-new-pm=0 | FileCheck %s
+
+define void @test(i32, i1 %cond) gc "statepoint-example" {
+; CHECK-LABEL: @test
+; CHECK-LABEL: loop.loop_crit_edge
+; CHECK: call void @do_safepoint
+; CHECK-NEXT: br label %loop
+entry:
+  br label %loop
+
+loop:
+  br i1 %cond, label %loop, label %exit
+
+exit:
+  ret void
+}
+
+; Test for the case where a single conditional branch jumps to two
+; different loop header blocks.  Since we're currently using LoopSimplify
+; this doesn't hit the interesting case, but once we remove that, we need
+; to be sure this keeps working.
+define void @test2(i32, i1 %cond) gc "statepoint-example" {
+; CHECK-LABEL: @test2
+; CHECK-LABEL: loop2.loop2_crit_edge:
+; CHECK: call void @do_safepoint
+; CHECK-NEXT: br label %loop2
+; CHECK-LABEL: loop2.loop_crit_edge:
+; CHECK: call void @do_safepoint
+; CHECK-NEXT: br label %loop
+entry:
+  br label %loop
+
+loop:
+  br label %loop2
+
+loop2:
+  br i1 %cond, label %loop, label %loop2
+}
+
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/statepoint-coreclr.ll b/llvm/test/Transforms/PlaceSafepoints/statepoint-coreclr.ll
new file mode 100644
index 0000000000000..40fbc2ec13c7d
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/statepoint-coreclr.ll
@@ -0,0 +1,29 @@
+; RUN: opt < %s -S -place-safepoints -enable-new-pm=0 | FileCheck %s
+
+; Basic test to make sure that safepoints are placed
+; for CoreCLR GC
+
+declare void @foo()
+
+define void @test_simple_call() gc "coreclr" {
+; CHECK-LABEL: test_simple_call
+entry:
+; CHECK: call void @do_safepoint
+  br label %other
+other:
+  call void @foo()
+  ret void
+}
+
+; This function is inlined when inserting a poll.  To avoid recursive
+; issues, make sure we don't place safepoints in it.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+; CHECK-LABEL: entry
+; CHECK-NEXT: do_safepoint
+; CHECK-NEXT: ret void
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/test/Transforms/PlaceSafepoints/statepoint-frameescape.ll b/llvm/test/Transforms/PlaceSafepoints/statepoint-frameescape.ll
new file mode 100644
index 0000000000000..c97eb76fe657c
--- /dev/null
+++ b/llvm/test/Transforms/PlaceSafepoints/statepoint-frameescape.ll
@@ -0,0 +1,29 @@
+; RUN: opt < %s -S -place-safepoints -enable-new-pm=0 | FileCheck %s
+
+declare void @llvm.localescape(...)
+
+; Do we insert the entry safepoint after the localescape intrinsic?
+define void @parent() gc "statepoint-example" {
+; CHECK-LABEL: @parent
+entry:
+; CHECK-LABEL: entry
+; CHECK-NEXT: alloca
+; CHECK-NEXT: localescape
+; CHECK-NEXT: call void @do_safepoint
+  %ptr = alloca i32
+  call void (...) @llvm.localescape(i32* %ptr)
+  ret void
+}
+
+; This function is inlined when inserting a poll.  To avoid recursive 
+; issues, make sure we don't place safepoints in it.
+declare void @do_safepoint()
+define void @gc.safepoint_poll() {
+; CHECK-LABEL: gc.safepoint_poll
+; CHECK-LABEL: entry
+; CHECK-NEXT: do_safepoint
+; CHECK-NEXT: ret void 
+entry:
+  call void @do_safepoint()
+  ret void
+}

diff --git a/llvm/utils/gn/secondary/llvm/lib/Transforms/Scalar/BUILD.gn b/llvm/utils/gn/secondary/llvm/lib/Transforms/Scalar/BUILD.gn
index b5791770ceb9e..58f7e05d92d61 100644
--- a/llvm/utils/gn/secondary/llvm/lib/Transforms/Scalar/BUILD.gn
+++ b/llvm/utils/gn/secondary/llvm/lib/Transforms/Scalar/BUILD.gn
@@ -70,6 +70,7 @@ static_library("Scalar") {
     "NaryReassociate.cpp",
     "NewGVN.cpp",
     "PartiallyInlineLibCalls.cpp",
+    "PlaceSafepoints.cpp",
     "Reassociate.cpp",
     "Reg2Mem.cpp",
     "RewriteStatepointsForGC.cpp",


        

