[libcxx-commits] [libcxx] 2135bab - [libc++] Save benchmark results in a json file (#119761)

via libcxx-commits libcxx-commits at lists.llvm.org
Fri Dec 13 11:19:38 PST 2024


Author: Louis Dionne
Date: 2024-12-13T14:19:35-05:00
New Revision: 2135babe28b038c99d77f15c39b3f7e498fc6694

URL: https://github.com/llvm/llvm-project/commit/2135babe28b038c99d77f15c39b3f7e498fc6694
DIFF: https://github.com/llvm/llvm-project/commit/2135babe28b038c99d77f15c39b3f7e498fc6694.diff

LOG: [libc++] Save benchmark results in a json file (#119761)

When running a benchmark, also save the benchmark results in a JSON
file. This is cheap to do and useful for comparing benchmark results
between different runs.
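
Such a comparison is not part of this commit, but as an illustrative
sketch: Google Benchmark's JSON output stores per-benchmark timings in a
top-level "benchmarks" array, so two result files can be diffed with a
small script like the one below (the `compare_runs` helper and the file
names are hypothetical, not anything shipped by libc++ or Benchmark):

```python
import json
import sys


def compare_runs(baseline: dict, candidate: dict) -> dict:
    """Relative change in real_time for benchmarks present in both
    Google Benchmark JSON result dicts (positive means slower)."""
    base = {b["name"]: b["real_time"] for b in baseline["benchmarks"]}
    cand = {b["name"]: b["real_time"] for b in candidate["benchmarks"]}
    return {
        name: (cand[name] - base[name]) / base[name]
        for name in base
        if name in cand and base[name] != 0
    }


if __name__ == "__main__":
    # Usage: compare.py old/benchmark-result.json new/benchmark-result.json
    with open(sys.argv[1]) as f1, open(sys.argv[2]) as f2:
        for name, delta in sorted(compare_runs(json.load(f1), json.load(f2)).items()):
            print(f"{name}: {delta:+.1%}")
```

(Google Benchmark itself ships a more complete `tools/compare.py` with
statistical tests; the sketch above only shows why having the JSON files
around is convenient.)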

Added: 
    

Modified: 
    libcxx/utils/libcxx/test/format.py

Removed: 
    


################################################################################
diff  --git a/libcxx/utils/libcxx/test/format.py b/libcxx/utils/libcxx/test/format.py
index f69a7dfedef2d5..59d0fffd378191 100644
--- a/libcxx/utils/libcxx/test/format.py
+++ b/libcxx/utils/libcxx/test/format.py
@@ -348,7 +348,7 @@ def execute(self, test, litConfig):
                 "%dbg(COMPILED WITH) %{cxx} %s %{flags} %{compile_flags} %{benchmark_flags} %{link_flags} -o %t.exe",
             ]
             if "enable-benchmarks=run" in test.config.available_features:
-                steps += ["%dbg(EXECUTED AS) %{exec} %t.exe"]
+                steps += ["%dbg(EXECUTED AS) %{exec} %t.exe --benchmark_out=%T/benchmark-result.json --benchmark_out_format=json"]
             return self._executeShTest(test, litConfig, steps)
         else:
             return lit.Test.Result(
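
For reference, the file written by `--benchmark_out` with
`--benchmark_out_format=json` follows Google Benchmark's JSON output
format, which looks roughly like the following (the benchmark name and
numbers here are illustrative, and the exact set of keys can vary with
the Benchmark version):

```json
{
  "context": {
    "date": "2024-12-13T14:19:35-05:00",
    "num_cpus": 8
  },
  "benchmarks": [
    {
      "name": "BM_Example/1024",
      "iterations": 100000,
      "real_time": 1234.5,
      "cpu_time": 1230.1,
      "time_unit": "ns"
    }
  ]
}
```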


        


More information about the libcxx-commits mailing list