[libc-commits] [libc] 880115e - [libc] Reorganize and clarify a few points around benchmarking
Eric Christopher via libc-commits
libc-commits at lists.llvm.org
Wed May 6 13:55:58 PDT 2020
Author: Eric Christopher
Date: 2020-05-06T13:54:13-07:00
New Revision: 880115e65ecd7a8838588faf6dfeef5c37e7f586
URL: https://github.com/llvm/llvm-project/commit/880115e65ecd7a8838588faf6dfeef5c37e7f586
DIFF: https://github.com/llvm/llvm-project/commit/880115e65ecd7a8838588faf6dfeef5c37e7f586.diff
LOG: [libc] Reorganize and clarify a few points around benchmarking
A few documentation clarifications, plus moving one part of the
docs closer to the first mention of the display target so that
it's easier to spot, based on some user feedback.
Differential Revision: https://reviews.llvm.org/D79443
Added:
Modified:
libc/utils/benchmarks/README.md
Removed:
################################################################################
diff --git a/libc/utils/benchmarks/README.md b/libc/utils/benchmarks/README.md
index f0046cd94442..fdd0223196f2 100644
--- a/libc/utils/benchmarks/README.md
+++ b/libc/utils/benchmarks/README.md
@@ -18,6 +18,7 @@ Then make sure to have `matplotlib`, `scipy` and `numpy` setup correctly:
apt-get install python3-pip
pip3 install matplotlib scipy numpy
```
+You may need `python3-gtk` or a similar package to display benchmark results.
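As a quick sanity check (not part of the original docs), you can verify that the plotting stack the display targets rely on is importable before building anything; this sketch only locates the packages and does not open a window:

```python
import importlib.util

# Report whether each plotting package named in the setup step is importable.
# find_spec only locates the module on disk; nothing is imported or displayed.
statuses = {
    mod: ("ok" if importlib.util.find_spec(mod) else "missing")
    for mod in ("matplotlib", "scipy", "numpy")
}
for mod, status in statuses.items():
    print(f"{mod}: {status}")
```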
To get good reproducibility it is important to make sure that the system runs in
`performance` mode. This is achieved by running:
@@ -38,6 +39,26 @@ cmake -B/tmp/build -Sllvm -DLLVM_ENABLE_PROJECTS=libc -DCMAKE_BUILD_TYPE=Release
make -C /tmp/build -j display-libc-memcpy-benchmark-small
```
+The display target will attempt to open a window on the machine where you're
+running the benchmark. If this doesn't work for you, you may want `render` or
+`run` instead, as detailed below.
+
+## Benchmarking targets
+
+The benchmarking process occurs in two steps:
+
+1. Benchmark the functions and produce a `json` file
+2. Display (or render) the `json` file
+
+Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`.
+
+ - `action` is one of:
+   - `run`, runs the benchmark and writes the `json` file
+   - `display`, displays the graph on screen
+   - `render`, renders the graph on disk as a `png` file
+ - `function` is one of: `memcpy`, `memcmp`, `memset`
+ - `configuration` is one of: `small`, `big`
+
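Putting the naming scheme together, a hypothetical invocation (reusing the `/tmp/build` tree from the earlier `cmake` step) might look like this; the individual pieces are the documented values, but this exact combination is only an illustration:

```shell
# Compose a benchmark target name from the documented scheme
# <action>-libc-<function>-benchmark-<configuration>.
action="run"; function="memset"; configuration="big"
target="${action}-libc-${function}-benchmark-${configuration}"
echo "${target}"
# To build it (requires the build tree): make -C /tmp/build -j "${target}"
```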
## Benchmarking regimes
Using a profiler to observe size distributions for calls into libc functions, it
@@ -62,22 +83,6 @@ Benchmarking configurations come in two flavors:
_<sup>1</sup> - The size refers to the size of the buffers to compare and not
the number of bytes until the first difference._
-## Benchmarking targets
-
-The benchmarking process occurs in two steps:
-
-1. Benchmark the functions and produce a `json` file
-2. Display (or renders) the `json` file
-
-Targets are of the form `<action>-libc-<function>-benchmark-<configuration>`
-
- - `action` is one of :
- - `run`, runs the benchmark and writes the `json` file
- - `display`, displays the graph on screen
- - `render`, renders the graph on disk as a `png` file
- - `function` is one of : `memcpy`, `memcmp`, `memset`
- - `configuration` is one of : `small`, `big`
-
## Superposing curves
It is possible to **merge** several `json` files into a single graph. This is