[Lldb-commits] [lldb] d7b3385 - Modify "statistics dump" to dump JSON.

Greg Clayton via lldb-commits lldb-commits at lists.llvm.org
Thu Oct 21 12:14:57 PDT 2021


Author: Greg Clayton
Date: 2021-10-21T12:14:21-07:00
New Revision: d7b338537cf360568474d31c2be86110ac22dc32

URL: https://github.com/llvm/llvm-project/commit/d7b338537cf360568474d31c2be86110ac22dc32
DIFF: https://github.com/llvm/llvm-project/commit/d7b338537cf360568474d31c2be86110ac22dc32.diff

LOG: Modify "statistics dump" to dump JSON.

This patch is a smaller version of a previous patch https://reviews.llvm.org/D110804.

This patch modifies "statistics dump" to report statistics for the current target, and it adds 3 new stats as well. The output of "statistics dump" is now emitted as JSON so that it can be used to track performance over time, for example by populating a database of performance results. Sample output looks like:

(lldb) statistics dump
{
  "expressionEvaluation": {
    "failures": 0,
    "successes": 0
  },
  "firstStopTime": 0.34164492800000001,
  "frameVariable": {
    "failures": 0,
    "successes": 0
  },
  "launchOrAttachTime": 0.31969605400000001,
  "targetCreateTime": 0.0040863039999999998
}

The top-level keys are described below; a short sketch of consuming this output follows the list.

"expressionEvaluation" which replaces the previous stats that were emitted as plain text. This dictionary contains the success and fail counts.
"frameVariable" which replaces the previous stats for "frame variable" that were emitted as plain text. This dictionary contains the success and fail counts.
"targetCreateTime" contains the number of seconds it took to create the target and load dependent libraries (if they were enabled) and also will contain symbol preloading times if that setting is enabled.
"launchOrAttachTime" is the time it takes from when the launch/attach is initiated to when the first private stop occurs.
"firstStopTime" is the time in seconds that it takes to stop at the first stop that is presented to the user via the LLDB interface. This value will only have meaning if you set a known breakpoint or stop location in your code that you want to measure as a performance test.

This diff is also meant as a place to discuss what we want out of the "statistics dump" command before adding more functionality. It also cleans up the previous code that stored statistics in a vector of numbers within the lldb_private::Target class.

Differential Revision: https://reviews.llvm.org/D111686

Added: 
    lldb/include/lldb/Target/Statistics.h
    lldb/source/Target/Statistics.cpp

Modified: 
    lldb/include/lldb/Target/Process.h
    lldb/include/lldb/Target/Target.h
    lldb/packages/Python/lldbsuite/test/lldbtest.py
    lldb/source/API/SBTarget.cpp
    lldb/source/Commands/CommandObjectExpression.cpp
    lldb/source/Commands/CommandObjectFrame.cpp
    lldb/source/Commands/CommandObjectStats.cpp
    lldb/source/Commands/Options.td
    lldb/source/Target/CMakeLists.txt
    lldb/source/Target/Process.cpp
    lldb/source/Target/Target.cpp
    lldb/test/API/commands/statistics/basic/TestStats.py
    lldb/test/API/functionalities/stats_api/TestStatisticsAPI.py

Removed: 
    


################################################################################
diff  --git a/lldb/include/lldb/Target/Process.h b/lldb/include/lldb/Target/Process.h
index 7ac53d3336cdf..e9743ed17dd46 100644
--- a/lldb/include/lldb/Target/Process.h
+++ b/lldb/include/lldb/Target/Process.h
@@ -238,10 +238,11 @@ class ProcessModID {
 
   ~ProcessModID() = default;
 
-  void BumpStopID() {
-    m_stop_id++;
+  uint32_t BumpStopID() {
+    const uint32_t prev_stop_id = m_stop_id++;
     if (!IsLastResumeForUserExpression())
       m_last_natural_stop_id++;
+    return prev_stop_id;
   }
 
   void BumpMemoryID() { m_memory_id++; }

diff  --git a/lldb/include/lldb/Target/Statistics.h b/lldb/include/lldb/Target/Statistics.h
new file mode 100644
index 0000000000000..dfeefcbe74eac
--- /dev/null
+++ b/lldb/include/lldb/Target/Statistics.h
@@ -0,0 +1,118 @@
+//===-- Statistics.h --------------------------------------------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLDB_TARGET_STATISTICS_H
+#define LLDB_TARGET_STATISTICS_H
+
+#include <chrono>
+#include <string>
+#include <vector>
+
+#include "lldb/Utility/Stream.h"
+#include "lldb/lldb-forward.h"
+#include "llvm/Support/JSON.h"
+
+namespace lldb_private {
+
+using StatsClock = std::chrono::high_resolution_clock;
+using StatsDuration = std::chrono::duration<double>;
+using StatsTimepoint = std::chrono::time_point<StatsClock>;
+
+/// A class that measures elapsed time in an exception safe way.
+///
+/// This RAII class is designed to help gather timing statistics within
+/// LLDB where objects have optional Duration variables that get updated with
+/// elapsed times. This helps LLDB measure statistics for many things that are
+/// then reported in LLDB commands.
+///
+/// Objects that need to measure elapsed times should have a variable of type
+/// "StatsDuration m_time_xxx;" which can then be used in the constructor of
+/// this class inside a scope that wants to measure something:
+///
+///   ElapsedTime elapsed(m_time_xxx);
+///   // Do some work
+///
+/// This class will increment the m_time_xxx variable with the elapsed time
+/// when the object goes out of scope. This allows a variable to measure
+/// something that might happen in stages at different times, like resolving
+/// a breakpoint each time a new shared library is loaded.
+class ElapsedTime {
+public:
+  /// Set to the start time when the object is created.
+  StatsTimepoint m_start_time;
+  /// Elapsed time in seconds to increment when this object goes out of scope.
+  StatsDuration &m_elapsed_time;
+
+public:
+  ElapsedTime(StatsDuration &opt_time) : m_elapsed_time(opt_time) {
+    m_start_time = StatsClock::now();
+  }
+  ~ElapsedTime() {
+    StatsDuration elapsed = StatsClock::now() - m_start_time;
+    m_elapsed_time += elapsed;
+  }
+};
+
+/// A class to count success/fail statistics.
+struct StatsSuccessFail {
+  StatsSuccessFail(llvm::StringRef n) : name(n.str()) {}
+
+  void NotifySuccess() { ++successes; }
+  void NotifyFailure() { ++failures; }
+
+  llvm::json::Value ToJSON() const;
+  std::string name;
+  uint32_t successes = 0;
+  uint32_t failures = 0;
+};
+
+/// A class that represents statistics for a single lldb_private::Target.
+class TargetStats {
+public:
+  llvm::json::Value ToJSON();
+
+  void SetLaunchOrAttachTime();
+  void SetFirstPrivateStopTime();
+  void SetFirstPublicStopTime();
+
+  StatsDuration &GetCreateTime() { return m_create_time; }
+  StatsSuccessFail &GetExpressionStats() { return m_expr_eval; }
+  StatsSuccessFail &GetFrameVariableStats() { return m_frame_var; }
+
+protected:
+  StatsDuration m_create_time{0.0};
+  llvm::Optional<StatsTimepoint> m_launch_or_attach_time;
+  llvm::Optional<StatsTimepoint> m_first_private_stop_time;
+  llvm::Optional<StatsTimepoint> m_first_public_stop_time;
+  StatsSuccessFail m_expr_eval{"expressionEvaluation"};
+  StatsSuccessFail m_frame_var{"frameVariable"};
+};
+
+class DebuggerStats {
+public:
+  static void SetCollectingStats(bool enable) { g_collecting_stats = enable; }
+  static bool GetCollectingStats() { return g_collecting_stats; }
+
+  /// Get metrics associated with all targets in a debugger in JSON format.
+  ///
+  /// \return
+  ///     Returns a JSON value that contains all target metrics.
+  static llvm::json::Value ReportStatistics(Debugger &debugger);
+
+protected:
+  // Collecting stats can be set to true to collect stats that are expensive
+  // to collect. By default all stats that are cheap to collect are enabled.
+  // This setting is here to maintain compatibility with "statistics enable"
+  // and "statistics disable".
+  static bool g_collecting_stats;
+};
+
+} // namespace lldb_private
+
+#endif // LLDB_TARGET_STATISTICS_H

diff  --git a/lldb/include/lldb/Target/Target.h b/lldb/include/lldb/Target/Target.h
index c8fc66db7ef4a..2f8be998771ff 100644
--- a/lldb/include/lldb/Target/Target.h
+++ b/lldb/include/lldb/Target/Target.h
@@ -28,6 +28,7 @@
 #include "lldb/Target/ExecutionContextScope.h"
 #include "lldb/Target/PathMappingList.h"
 #include "lldb/Target/SectionLoadHistory.h"
+#include "lldb/Target/Statistics.h"
 #include "lldb/Target/ThreadSpec.h"
 #include "lldb/Utility/ArchSpec.h"
 #include "lldb/Utility/Broadcaster.h"
@@ -1451,23 +1452,22 @@ class Target : public std::enable_shared_from_this<Target>,
 
   // Utilities for `statistics` command.
 private:
-  std::vector<uint32_t> m_stats_storage;
-  bool m_collecting_stats = false;
+  // Target metrics storage.
+  TargetStats m_stats;
 
 public:
-  void SetCollectingStats(bool v) { m_collecting_stats = v; }
-
-  bool GetCollectingStats() { return m_collecting_stats; }
-
-  void IncrementStats(lldb_private::StatisticKind key) {
-    if (!GetCollectingStats())
-      return;
-    lldbassert(key < lldb_private::StatisticKind::StatisticMax &&
-               "invalid statistics!");
-    m_stats_storage[key] += 1;
-  }
+  /// Get metrics associated with this target in JSON format.
+  ///
+  /// Target metrics help measure timings and information that is contained in
+  /// a target. These are designed to help measure performance of a debug
+  /// session as well as represent the current state of the target, like
+  /// information on the currently loaded modules, currently set breakpoints and more.
+  ///
+  /// \return
+  ///     Returns a JSON value that contains all target metrics.
+  llvm::json::Value ReportStatistics();
 
-  std::vector<uint32_t> GetStatistics() { return m_stats_storage; }
+  TargetStats &GetStatistics() { return m_stats; }
 
 private:
   /// Construct with optional file and arch.

diff  --git a/lldb/packages/Python/lldbsuite/test/lldbtest.py b/lldb/packages/Python/lldbsuite/test/lldbtest.py
index 169f43f59c2f9..352ce6fabf165 100644
--- a/lldb/packages/Python/lldbsuite/test/lldbtest.py
+++ b/lldb/packages/Python/lldbsuite/test/lldbtest.py
@@ -750,10 +750,6 @@ def getBuildArtifact(self, name="a.out"):
         """Return absolute path to an artifact in the test's build directory."""
         return os.path.join(self.getBuildDir(), name)
 
-    def getSourcePath(self, name):
-        """Return absolute path to a file in the test's source directory."""
-        return os.path.join(self.getSourceDir(), name)
-
     @classmethod
     def setUpCommands(cls):
         commands = [

diff  --git a/lldb/source/API/SBTarget.cpp b/lldb/source/API/SBTarget.cpp
index af62d4e02eb10..a605d46e5233a 100644
--- a/lldb/source/API/SBTarget.cpp
+++ b/lldb/source/API/SBTarget.cpp
@@ -213,17 +213,9 @@ SBStructuredData SBTarget::GetStatistics() {
   TargetSP target_sp(GetSP());
   if (!target_sp)
     return LLDB_RECORD_RESULT(data);
-
-  auto stats_up = std::make_unique<StructuredData::Dictionary>();
-  int i = 0;
-  for (auto &Entry : target_sp->GetStatistics()) {
-    std::string Desc = lldb_private::GetStatDescription(
-        static_cast<lldb_private::StatisticKind>(i));
-    stats_up->AddIntegerItem(Desc, Entry);
-    i += 1;
-  }
-
-  data.m_impl_up->SetObjectSP(std::move(stats_up));
+  std::string json_str =
+      llvm::formatv("{0:2}", target_sp->ReportStatistics()).str();
+  data.m_impl_up->SetObjectSP(StructuredData::ParseJSON(json_str));
   return LLDB_RECORD_RESULT(data);
 }
 
@@ -233,7 +225,7 @@ void SBTarget::SetCollectingStats(bool v) {
   TargetSP target_sp(GetSP());
   if (!target_sp)
     return;
-  return target_sp->SetCollectingStats(v);
+  return DebuggerStats::SetCollectingStats(v);
 }
 
 bool SBTarget::GetCollectingStats() {
@@ -242,7 +234,7 @@ bool SBTarget::GetCollectingStats() {
   TargetSP target_sp(GetSP());
   if (!target_sp)
     return false;
-  return target_sp->GetCollectingStats();
+  return DebuggerStats::GetCollectingStats();
 }
 
 SBProcess SBTarget::LoadCore(const char *core_file) {
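
With this change, SBTarget::GetStatistics() returns the same JSON wrapped in an SBStructuredData. A small sketch of reading it from Python, mirroring the updated TestStatisticsAPI.py at the end of this patch; the "a.out" path is a placeholder.

import json
import lldb

debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("a.out")  # placeholder executable
stats_data = target.GetStatistics()      # lldb.SBStructuredData
stream = lldb.SBStream()
error = stats_data.GetAsJSON(stream)     # serialize back to JSON text
if error.Success():
    stats = json.loads(stream.GetData())
    print(stats["frameVariable"]["failures"])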

diff  --git a/lldb/source/Commands/CommandObjectExpression.cpp b/lldb/source/Commands/CommandObjectExpression.cpp
index a93cc15b0ff35..9d13ccab6d3ec 100644
--- a/lldb/source/Commands/CommandObjectExpression.cpp
+++ b/lldb/source/Commands/CommandObjectExpression.cpp
@@ -659,13 +659,8 @@ bool CommandObjectExpression::DoExecute(llvm::StringRef command,
         fixed_command.append(m_fixed_expression);
       history.AppendString(fixed_command);
     }
-    // Increment statistics to record this expression evaluation success.
-    target.IncrementStats(StatisticKind::ExpressionSuccessful);
     return true;
   }
-
-  // Increment statistics to record this expression evaluation failure.
-  target.IncrementStats(StatisticKind::ExpressionFailure);
   result.SetStatus(eReturnStatusFailed);
   return false;
 }

diff  --git a/lldb/source/Commands/CommandObjectFrame.cpp b/lldb/source/Commands/CommandObjectFrame.cpp
index 9e5d79ce6257a..2b9f5316409f7 100644
--- a/lldb/source/Commands/CommandObjectFrame.cpp
+++ b/lldb/source/Commands/CommandObjectFrame.cpp
@@ -708,11 +708,11 @@ class CommandObjectFrameVariable : public CommandObjectParsed {
 
     // Increment statistics.
     bool res = result.Succeeded();
-    Target &target = GetSelectedOrDummyTarget();
+    TargetStats &target_stats = GetSelectedOrDummyTarget().GetStatistics();
     if (res)
-      target.IncrementStats(StatisticKind::FrameVarSuccess);
+      target_stats.GetFrameVariableStats().NotifySuccess();
     else
-      target.IncrementStats(StatisticKind::FrameVarFailure);
+      target_stats.GetFrameVariableStats().NotifyFailure();
     return res;
   }
 

diff  --git a/lldb/source/Commands/CommandObjectStats.cpp b/lldb/source/Commands/CommandObjectStats.cpp
index 23c7dbbaf3730..70d63af8e8a6e 100644
--- a/lldb/source/Commands/CommandObjectStats.cpp
+++ b/lldb/source/Commands/CommandObjectStats.cpp
@@ -7,6 +7,8 @@
 //===----------------------------------------------------------------------===//
 
 #include "CommandObjectStats.h"
+#include "lldb/Core/Debugger.h"
+#include "lldb/Host/OptionParser.h"
 #include "lldb/Interpreter/CommandReturnObject.h"
 #include "lldb/Target/Target.h"
 
@@ -24,14 +26,12 @@ class CommandObjectStatsEnable : public CommandObjectParsed {
 
 protected:
   bool DoExecute(Args &command, CommandReturnObject &result) override {
-    Target &target = GetSelectedOrDummyTarget();
-
-    if (target.GetCollectingStats()) {
+    if (DebuggerStats::GetCollectingStats()) {
       result.AppendError("statistics already enabled");
       return false;
     }
 
-    target.SetCollectingStats(true);
+    DebuggerStats::SetCollectingStats(true);
     result.SetStatus(eReturnStatusSuccessFinishResult);
     return true;
   }
@@ -48,44 +48,75 @@ class CommandObjectStatsDisable : public CommandObjectParsed {
 
 protected:
   bool DoExecute(Args &command, CommandReturnObject &result) override {
-    Target &target = GetSelectedOrDummyTarget();
-
-    if (!target.GetCollectingStats()) {
+    if (!DebuggerStats::GetCollectingStats()) {
       result.AppendError("need to enable statistics before disabling them");
       return false;
     }
 
-    target.SetCollectingStats(false);
+    DebuggerStats::SetCollectingStats(false);
     result.SetStatus(eReturnStatusSuccessFinishResult);
     return true;
   }
 };
 
+#define LLDB_OPTIONS_statistics_dump
+#include "CommandOptions.inc"
+
 class CommandObjectStatsDump : public CommandObjectParsed {
+  class CommandOptions : public Options {
+  public:
+    CommandOptions() : Options() { OptionParsingStarting(nullptr); }
+
+    Status SetOptionValue(uint32_t option_idx, llvm::StringRef option_arg,
+                          ExecutionContext *execution_context) override {
+      Status error;
+      const int short_option = m_getopt_table[option_idx].val;
+
+      switch (short_option) {
+      case 'a':
+        m_all_targets = true;
+        break;
+      default:
+        llvm_unreachable("Unimplemented option");
+      }
+      return error;
+    }
+
+    void OptionParsingStarting(ExecutionContext *execution_context) override {
+      m_all_targets = false;
+    }
+
+    llvm::ArrayRef<OptionDefinition> GetDefinitions() override {
+      return llvm::makeArrayRef(g_statistics_dump_options);
+    }
+
+    bool m_all_targets = false;
+  };
+
 public:
   CommandObjectStatsDump(CommandInterpreter &interpreter)
-      : CommandObjectParsed(interpreter, "dump", "Dump statistics results",
-                            nullptr, eCommandProcessMustBePaused) {}
+      : CommandObjectParsed(
+            interpreter, "statistics dump", "Dump metrics in JSON format",
+            "statistics dump [<options>]", eCommandRequiresTarget) {}
 
   ~CommandObjectStatsDump() override = default;
 
+  Options *GetOptions() override { return &m_options; }
+
 protected:
   bool DoExecute(Args &command, CommandReturnObject &result) override {
-    Target &target = GetSelectedOrDummyTarget();
-
-    uint32_t i = 0;
-    for (auto &stat : target.GetStatistics()) {
-      result.AppendMessageWithFormat(
-          "%s : %u\n",
-          lldb_private::GetStatDescription(
-              static_cast<lldb_private::StatisticKind>(i))
-              .c_str(),
-          stat);
-      i += 1;
+    if (m_options.m_all_targets) {
+      result.AppendMessageWithFormatv(
+          "{0:2}", DebuggerStats::ReportStatistics(GetDebugger()));
+    } else {
+      Target &target = m_exe_ctx.GetTargetRef();
+      result.AppendMessageWithFormatv("{0:2}", target.ReportStatistics());
     }
     result.SetStatus(eReturnStatusSuccessFinishResult);
     return true;
   }
+
+  CommandOptions m_options;
 };
 
 CommandObjectStats::CommandObjectStats(CommandInterpreter &interpreter)

diff  --git a/lldb/source/Commands/Options.td b/lldb/source/Commands/Options.td
index b9f8950d2cace..3e89eb0f6bda1 100644
--- a/lldb/source/Commands/Options.td
+++ b/lldb/source/Commands/Options.td
@@ -1305,3 +1305,8 @@ let Command = "trace schema" in {
   def trace_schema_verbose : Option<"verbose", "v">, Group<1>,
     Desc<"Show verbose trace schema logging for debugging the plug-in.">;
 }
+
+let Command = "statistics dump" in {
+  def statistics_dump_all: Option<"all-targets", "a">, Group<1>,
+    Desc<"Include statistics for all targets.">;
+}
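
As an illustrative example only (values and exact formatting may differ), the new --all-targets option wraps the per-target dictionaries in a top-level "targets" array produced by DebuggerStats::ReportStatistics (see Statistics.cpp below):

(lldb) statistics dump --all-targets
{
  "targets": [
    {
      "expressionEvaluation": {
        "failures": 0,
        "successes": 0
      },
      "frameVariable": {
        "failures": 0,
        "successes": 0
      },
      "targetCreateTime": 0.004086303
    }
  ]
}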

diff  --git a/lldb/source/Target/CMakeLists.txt b/lldb/source/Target/CMakeLists.txt
index f80dab69282b3..2353ddcba2495 100644
--- a/lldb/source/Target/CMakeLists.txt
+++ b/lldb/source/Target/CMakeLists.txt
@@ -39,6 +39,7 @@ add_lldb_library(lldbTarget
   StackFrameList.cpp
   StackFrameRecognizer.cpp
   StackID.cpp
+  Statistics.cpp
   StopInfo.cpp
   StructuredDataPlugin.cpp
   SystemRuntime.cpp

diff  --git a/lldb/source/Target/Process.cpp b/lldb/source/Target/Process.cpp
index e4dfe0a6e7353..bc2c4a1adb764 100644
--- a/lldb/source/Target/Process.cpp
+++ b/lldb/source/Target/Process.cpp
@@ -1297,6 +1297,17 @@ StateType Process::GetState() {
 }
 
 void Process::SetPublicState(StateType new_state, bool restarted) {
+  const bool new_state_is_stopped = StateIsStoppedState(new_state, false);
+  if (new_state_is_stopped) {
+    // This will only set the time if the public stop time has no value, so
+    // it is ok to call this multiple times. With a public stop we can't look
+    // at the stop ID because many private stops might have happened, so we
+    // can't check for a stop ID of zero. This allows the "statistics" command
+    // to dump the time it takes to reach somewhere in your code, like a
+    // breakpoint you set.
+    GetTarget().GetStatistics().SetFirstPublicStopTime();
+  }
+
   Log *log(lldb_private::GetLogIfAnyCategoriesSet(LIBLLDB_LOG_STATE |
                                                   LIBLLDB_LOG_PROCESS));
   LLDB_LOGF(log, "Process::SetPublicState (state = %s, restarted = %i)",
@@ -1315,7 +1326,6 @@ void Process::SetPublicState(StateType new_state, bool restarted) {
       m_public_run_lock.SetStopped();
     } else {
       const bool old_state_is_stopped = StateIsStoppedState(old_state, false);
-      const bool new_state_is_stopped = StateIsStoppedState(new_state, false);
       if ((old_state_is_stopped != new_state_is_stopped)) {
         if (new_state_is_stopped && !restarted) {
           LLDB_LOGF(log, "Process::SetPublicState (%s) -- unlocking run lock",
@@ -1446,7 +1456,9 @@ void Process::SetPrivateState(StateType new_state) {
       // before we get here.
       m_thread_list.DidStop();
 
-      m_mod_id.BumpStopID();
+      if (m_mod_id.BumpStopID() == 0)
+        GetTarget().GetStatistics().SetFirstPrivateStopTime();
+
       if (!m_mod_id.IsLastResumeForUserExpression())
         m_mod_id.SetStopEventForLastNaturalStopID(event_sp);
       m_memory_cache.Clear();

diff  --git a/lldb/source/Target/Statistics.cpp b/lldb/source/Target/Statistics.cpp
new file mode 100644
index 0000000000000..ce7a4f2f8c756
--- /dev/null
+++ b/lldb/source/Target/Statistics.cpp
@@ -0,0 +1,79 @@
+//===-- Statistics.cpp ----------------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "lldb/Target/Statistics.h"
+
+#include "lldb/Core/Debugger.h"
+#include "lldb/Core/Module.h"
+#include "lldb/Symbol/SymbolFile.h"
+#include "lldb/Target/Target.h"
+
+using namespace lldb;
+using namespace lldb_private;
+using namespace llvm;
+
+json::Value StatsSuccessFail::ToJSON() const {
+  return json::Object{{"successes", successes}, {"failures", failures}};
+}
+
+static double elapsed(const StatsTimepoint &start, const StatsTimepoint &end) {
+  StatsDuration elapsed = end.time_since_epoch() - start.time_since_epoch();
+  return elapsed.count();
+}
+
+json::Value TargetStats::ToJSON() {
+  json::Object target_metrics_json{{m_expr_eval.name, m_expr_eval.ToJSON()},
+                                   {m_frame_var.name, m_frame_var.ToJSON()}};
+  if (m_launch_or_attach_time && m_first_private_stop_time) {
+    double elapsed_time =
+        elapsed(*m_launch_or_attach_time, *m_first_private_stop_time);
+    target_metrics_json.try_emplace("launchOrAttachTime", elapsed_time);
+  }
+  if (m_launch_or_attach_time && m_first_public_stop_time) {
+    double elapsed_time =
+        elapsed(*m_launch_or_attach_time, *m_first_public_stop_time);
+    target_metrics_json.try_emplace("firstStopTime", elapsed_time);
+  }
+  target_metrics_json.try_emplace("targetCreateTime", m_create_time.count());
+
+  return target_metrics_json;
+}
+
+void TargetStats::SetLaunchOrAttachTime() {
+  m_launch_or_attach_time = StatsClock::now();
+  m_first_private_stop_time = llvm::None;
+}
+
+void TargetStats::SetFirstPrivateStopTime() {
+  // Launching and attaching have many paths depending on whether synchronous
+  // mode was used and whether we are stopping at the entry point. Only set
+  // the first stop time if it hasn't already been set.
+  if (!m_first_private_stop_time)
+    m_first_private_stop_time = StatsClock::now();
+}
+
+void TargetStats::SetFirstPublicStopTime() {
+  // Launching and attaching have many paths depending on whether synchronous
+  // mode was used and whether we are stopping at the entry point. Only set
+  // the first stop time if it hasn't already been set.
+  if (!m_first_public_stop_time)
+    m_first_public_stop_time = StatsClock::now();
+}
+
+bool DebuggerStats::g_collecting_stats = false;
+
+llvm::json::Value DebuggerStats::ReportStatistics(Debugger &debugger) {
+  json::Array targets;
+  for (const auto &target : debugger.GetTargetList().Targets()) {
+    targets.emplace_back(target->ReportStatistics());
+  }
+  json::Object global_stats{
+      {"targets", std::move(targets)},
+  };
+  return std::move(global_stats);
+}

diff  --git a/lldb/source/Target/Target.cpp b/lldb/source/Target/Target.cpp
index 7f39da697eb0f..49170522c7a48 100644
--- a/lldb/source/Target/Target.cpp
+++ b/lldb/source/Target/Target.cpp
@@ -96,14 +96,10 @@ Target::Target(Debugger &debugger, const ArchSpec &target_arch,
       m_watchpoint_list(), m_process_sp(), m_search_filter_sp(),
       m_image_search_paths(ImageSearchPathsChanged, this),
       m_source_manager_up(), m_stop_hooks(), m_stop_hook_next_id(0),
-      m_latest_stop_hook_id(0),
-      m_valid(true), m_suppress_stop_hooks(false),
+      m_latest_stop_hook_id(0), m_valid(true), m_suppress_stop_hooks(false),
       m_is_dummy_target(is_dummy_target),
       m_frame_recognizer_manager_up(
-          std::make_unique<StackFrameRecognizerManager>()),
-      m_stats_storage(static_cast<int>(StatisticKind::StatisticMax))
-
-{
+          std::make_unique<StackFrameRecognizerManager>()) {
   SetEventName(eBroadcastBitBreakpointChanged, "breakpoint-changed");
   SetEventName(eBroadcastBitModulesLoaded, "modules-loaded");
   SetEventName(eBroadcastBitModulesUnloaded, "modules-unloaded");
@@ -1401,6 +1397,7 @@ void Target::SetExecutableModule(ModuleSP &executable_sp,
   ClearModules(false);
 
   if (executable_sp) {
+    ElapsedTime elapsed(m_stats.GetCreateTime());
     LLDB_SCOPED_TIMERF("Target::SetExecutableModule (executable = '%s')",
                        executable_sp->GetFileSpec().GetPath().c_str());
 
@@ -2390,8 +2387,10 @@ ExpressionResults Target::EvaluateExpression(
 
   ExpressionResults execution_results = eExpressionSetupError;
 
-  if (expr.empty())
+  if (expr.empty()) {
+    m_stats.GetExpressionStats().NotifyFailure();
     return execution_results;
+  }
 
   // We shouldn't run stop hooks in expressions.
   bool old_suppress_value = m_suppress_stop_hooks;
@@ -2436,6 +2435,10 @@ ExpressionResults Target::EvaluateExpression(
                                                  fixed_expression, ctx_obj);
   }
 
+  if (execution_results == eExpressionCompleted)
+    m_stats.GetExpressionStats().NotifySuccess();
+  else
+    m_stats.GetExpressionStats().NotifyFailure();
   return execution_results;
 }
 
@@ -2899,6 +2902,7 @@ bool Target::SetSectionUnloaded(const lldb::SectionSP &section_sp,
 void Target::ClearAllLoadedSections() { m_section_load_history.Clear(); }
 
 Status Target::Launch(ProcessLaunchInfo &launch_info, Stream *stream) {
+  m_stats.SetLaunchOrAttachTime();
   Status error;
   Log *log(lldb_private::GetLogIfAllCategoriesSet(LIBLLDB_LOG_TARGET));
 
@@ -3102,6 +3106,7 @@ llvm::Expected<TraceSP> Target::GetTraceOrCreate() {
 }
 
 Status Target::Attach(ProcessAttachInfo &attach_info, Stream *stream) {
+  m_stats.SetLaunchOrAttachTime();
   auto state = eStateInvalid;
   auto process_sp = GetProcessSP();
   if (process_sp) {
@@ -4447,3 +4452,6 @@ std::recursive_mutex &Target::GetAPIMutex() {
   else
     return m_mutex;
 }
+
+/// Get metrics associated with this target in JSON format.
+llvm::json::Value Target::ReportStatistics() { return m_stats.ToJSON(); }

diff  --git a/lldb/test/API/commands/statistics/basic/TestStats.py b/lldb/test/API/commands/statistics/basic/TestStats.py
index be028199fade1..5f9e7b0ca9e2b 100644
--- a/lldb/test/API/commands/statistics/basic/TestStats.py
+++ b/lldb/test/API/commands/statistics/basic/TestStats.py
@@ -1,4 +1,5 @@
 import lldb
+import json
 from lldbsuite.test.decorators import *
 from lldbsuite.test.lldbtest import *
 from lldbsuite.test import lldbutil
@@ -7,22 +8,87 @@ class TestCase(TestBase):
 
     mydir = TestBase.compute_mydir(__file__)
 
-    def test(self):
+    def setUp(self):
+        TestBase.setUp(self)
         self.build()
-        lldbutil.run_to_source_breakpoint(self, "// break here", lldb.SBFileSpec("main.c"))
 
-        self.expect("statistics disable", substrs=['need to enable statistics before disabling'], error=True)
+    NO_DEBUG_INFO_TESTCASE = True
+
+    def test_enable_disable(self):
+        """
+        Test "statistics disable" and "statistics enable". These don't do
+        anything anymore for cheap to gather statistics. In the future if
+        statistics are expensive to gather, we can enable the feature inside
+        of LLDB and test that enabling and disabling stops expesive information
+        from being gathered.
+        """
+        target = self.createTestTarget()
 
-        # 'expression' should change the statistics.
+        self.expect("statistics disable", substrs=['need to enable statistics before disabling'], error=True)
         self.expect("statistics enable")
         self.expect("statistics enable", substrs=['already enabled'], error=True)
-        self.expect("expr patatino", substrs=['27'])
         self.expect("statistics disable")
-        self.expect("statistics dump", substrs=['expr evaluation successes : 1\n',
-                                                'expr evaluation failures : 0\n'])
+        self.expect("statistics disable", substrs=['need to enable statistics before disabling'], error=True)
 
-        self.expect("statistics enable")
-        # Doesn't parse.
+    def verify_key_in_dict(self, key, d, description):
+        self.assertEqual(key in d, True,
+            'make sure key "%s" is in dictionary %s' % (key, description))
+
+    def verify_key_not_in_dict(self, key, d, description):
+        self.assertEqual(key in d, False,
+            'make sure key "%s" is not in dictionary %s' % (key, description))
+
+    def verify_keys(self, dict, description, keys_exist, keys_missing=None):
+        """
+            Verify that all keys in "keys_exist" list are top level items in
+            "dict", and that all keys in "keys_missing" do not exist as top
+            level items in "dict".
+        """
+        if keys_exist:
+            for key in keys_exist:
+                self.verify_key_in_dict(key, dict, description)
+        if keys_missing:
+            for key in keys_missing:
+                self.verify_key_not_in_dict(key, dict, description)
+
+    def verify_success_fail_count(self, stats, key, num_successes, num_fails):
+        self.verify_key_in_dict(key, stats, 'stats["%s"]' % (key))
+        success_fail_dict = stats[key]
+        self.assertEqual(success_fail_dict['successes'], num_successes,
+                         'make sure success count')
+        self.assertEqual(success_fail_dict['failures'], num_fails,
+                         'make sure failure count')
+
+    def get_stats(self, options=None, log_path=None):
+        """
+            Get the output of the "statistics dump" with optional extra options
+            and return the JSON as a python dictionary.
+        """
+        # If log_path is set, open the path and emit the output of the command
+        # for debugging purposes.
+        if log_path is not None:
+            f = open(log_path, 'w')
+        else:
+            f = None
+        return_obj = lldb.SBCommandReturnObject()
+        command = "statistics dump "
+        if options is not None:
+            command += options
+        if f:
+            f.write('(lldb) %s\n' % (command))
+        self.ci.HandleCommand(command, return_obj, False)
+        metrics_json = return_obj.GetOutput()
+        if f:
+            f.write(metrics_json)
+        return json.loads(metrics_json)
+
+    def test_expressions_frame_var_counts(self):
+        lldbutil.run_to_source_breakpoint(self, "// break here",
+                                          lldb.SBFileSpec("main.c"))
+
+        self.expect("expr patatino", substrs=['27'])
+        stats = self.get_stats()
+        self.verify_success_fail_count(stats, 'expressionEvaluation', 1, 0)
         self.expect("expr doesnt_exist", error=True,
                     substrs=["undeclared identifier 'doesnt_exist'"])
         # Doesn't successfully execute.
@@ -30,17 +96,88 @@ def test(self):
         # Interpret an integer as an array with 3 elements is also a failure.
         self.expect("expr -Z 3 -- 1", error=True,
                     substrs=["expression cannot be used with --element-count"])
-        self.expect("statistics disable")
         # We should have gotten 3 new failures and the previous success.
-        self.expect("statistics dump", substrs=['expr evaluation successes : 1\n',
-                                                'expr evaluation failures : 3\n'])
-
-        # 'frame var' with disabled statistics shouldn't change stats.
-        self.expect("frame var", substrs=['27'])
+        stats = self.get_stats()
+        self.verify_success_fail_count(stats, 'expressionEvaluation', 1, 3)
 
         self.expect("statistics enable")
         # 'frame var' with enabled statistics will change stats.
         self.expect("frame var", substrs=['27'])
-        self.expect("statistics disable")
-        self.expect("statistics dump", substrs=['frame var successes : 1\n',
-                                                'frame var failures : 0\n'])
+        stats = self.get_stats()
+        self.verify_success_fail_count(stats, 'frameVariable', 1, 0)
+
+    def test_default_no_run(self):
+        """Test "statistics dump" without running the target.
+
+        When we don't run the target, we expect to not see any 'firstStopTime'
+        or 'launchOrAttachTime' top level keys that measure the launch or
+        attach of the target.
+
+        Output expected to be something like:
+
+        (lldb) statistics dump
+        {
+          "targetCreateTime": 0.26566899599999999,
+          "expressionEvaluation": {
+            "failures": 0,
+            "successes": 0
+          },
+          "frameVariable": {
+            "failures": 0,
+            "successes": 0
+          }
+        }
+        """
+        target = self.createTestTarget()
+        stats = self.get_stats()
+        keys_exist = [
+            'expressionEvaluation',
+            'frameVariable',
+            'targetCreateTime',
+        ]
+        keys_missing = [
+            'firstStopTime',
+            'launchOrAttachTime'
+        ]
+        self.verify_keys(stats, '"stats"', keys_exist, keys_missing)
+        self.assertGreater(stats['targetCreateTime'], 0.0)
+
+    def test_default_with_run(self):
+        """Test "statistics dump" when running the target to a breakpoint.
+
+        When we run the target, we expect to see 'launchOrAttachTime' and
+        'firstStopTime' top level keys.
+
+        Output expected to be something like:
+
+        (lldb) statistics dump
+        {
+          "firstStopTime": 0.34164492800000001,
+          "launchOrAttachTime": 0.31969605400000001,
+          "targetCreateTime": 0.0040863039999999998
+          "expressionEvaluation": {
+            "failures": 0,
+            "successes": 0
+          },
+          "frameVariable": {
+            "failures": 0,
+            "successes": 0
+          }
+        }
+
+        """
+        target = self.createTestTarget()
+        lldbutil.run_to_source_breakpoint(self, "// break here",
+                                          lldb.SBFileSpec("main.c"))
+        stats = self.get_stats()
+        keys_exist = [
+            'expressionEvaluation',
+            'firstStopTime',
+            'frameVariable',
+            'launchOrAttachTime',
+            'targetCreateTime',
+        ]
+        self.verify_keys(stats, '"stats"', keys_exist, None)
+        self.assertGreater(stats['firstStopTime'], 0.0)
+        self.assertGreater(stats['launchOrAttachTime'], 0.0)
+        self.assertGreater(stats['targetCreateTime'], 0.0)

diff  --git a/lldb/test/API/functionalities/stats_api/TestStatisticsAPI.py b/lldb/test/API/functionalities/stats_api/TestStatisticsAPI.py
index 58ae0fdf7880f..bbbc0739da720 100644
--- a/lldb/test/API/functionalities/stats_api/TestStatisticsAPI.py
+++ b/lldb/test/API/functionalities/stats_api/TestStatisticsAPI.py
@@ -10,6 +10,8 @@
 class TestStatsAPI(TestBase):
     mydir = TestBase.compute_mydir(__file__)
 
+    NO_DEBUG_INFO_TESTCASE = True
+
     def test_stats_api(self):
         self.build()
         exe = self.getBuildArtifact("a.out")
@@ -26,9 +28,18 @@ def test_stats_api(self):
         stats = target.GetStatistics()
         stream = lldb.SBStream()
         res = stats.GetAsJSON(stream)
-        stats_json = sorted(json.loads(stream.GetData()))
-        self.assertEqual(len(stats_json), 4)
-        self.assertIn("Number of expr evaluation failures", stats_json)
-        self.assertIn("Number of expr evaluation successes", stats_json)
-        self.assertIn("Number of frame var failures", stats_json)
-        self.assertIn("Number of frame var successes", stats_json)
+        stats_json = json.loads(stream.GetData())
+        self.assertEqual('expressionEvaluation' in stats_json, True,
+                'Make sure the "expressionEvaluation" key is in target.GetStatistics()')
+        self.assertEqual('frameVariable' in stats_json, True,
+                'Make sure the "frameVariable" key is in target.GetStatistics()')
+        expressionEvaluation = stats_json['expressionEvaluation']
+        self.assertEqual('successes' in expressionEvaluation, True,
+                'Make sure the "successes" key is in the "expressionEvaluation" dictionary')
+        self.assertEqual('failures' in expressionEvaluation, True,
+                'Make sure the "failures" key is in the "expressionEvaluation" dictionary')
+        frameVariable = stats_json['frameVariable']
+        self.assertEqual('successes' in frameVariable, True,
+                'Make sure the "successes" key is in the "frameVariable" dictionary')
+        self.assertEqual('failures' in frameVariable, True,
+                'Make sure the "failures" key is in the "frameVariable" dictionary')


        

