[LNT] r266325 - Some more docs for the LNT json file format.
Kristof Beyls via llvm-commits
llvm-commits at lists.llvm.org
Thu Apr 14 08:34:51 PDT 2016
Author: kbeyls
Date: Thu Apr 14 10:34:49 2016
New Revision: 266325
URL: http://llvm.org/viewvc/llvm-project?rev=266325&view=rev
Log:
Some more docs for the LNT json file format.
A few people have recently asked me how to quickly try out LNT on their own
data. For most of them, the quickest way to do so is to write a script that
translates their data format into the JSON file to be submitted to the LNT
server. The easiest way to get started with that is probably to see an example
and a simple description of the json file structure, rather than using the APIs
in lnt.testing.
Once evaluation of the LNT server and web-ui shows it's a useful tool, the
recommended way to produce lnt json files remains to use the lnt.testing api.
Differential Revision: http://reviews.llvm.org/D19053
Added:
lnt/trunk/docs/importing_data.rst
Modified:
lnt/trunk/docs/concepts.rst
lnt/trunk/docs/contents.rst
Modified: lnt/trunk/docs/concepts.rst
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/docs/concepts.rst?rev=266325&r1=266324&r2=266325&view=diff
==============================================================================
--- lnt/trunk/docs/concepts.rst (original)
+++ lnt/trunk/docs/concepts.rst Thu Apr 14 10:34:49 2016
@@ -61,15 +61,11 @@ and code size. Other test suites can be
don't match your needs.
Any program can submit results data to LNT, and specify any test suite. The
-data format is a simple JSON file, and that file needs to be HTTP POSTed to the
-submitRun URL.
+data format is a simple JSON file, and that file needs to be submitted to the
+server using either the ``lnt import`` or ``lnt submit`` commands (see
+:ref:`tools`), or HTTP POSTed to the submitRun URL.
The most common program to submit data to LNT is the LNT client application
itself. The ``lnt runtest nt`` command can run the LLVM test suite, and submit
data under the NTS test suite. Likewise the ``lnt runtest compile`` command
can run a set of compile time benchmarks and submit to the Compile test suite.
-
-Given how simple it is to make your own results and send them to LNT,
-it is common to not use the LNT client application at all, and just have a
-custom script run your tests and submit the data to the LNT server. Details
-on how to do this are in :mod:`lnt.testing`
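For readers trying this out, the HTTP POST route mentioned in the added lines above can be sketched in Python. This is a hedged sketch, not part of the patch: the server URL is a placeholder for a local LNT instance, and the ``input_data`` form-field name is an assumption based on how the LNT client tools submit reports; verify it against your server's submitRun endpoint. The request is only constructed here, not actually sent:

```python
import json
import urllib.parse
import urllib.request

# Placeholder URL for a local LNT instance; adjust host and database name.
SUBMIT_URL = "http://localhost:8000/db_default/submitRun"

def build_submit_request(report):
    """Build (but do not send) an HTTP POST request carrying an LNT report.

    Assumption: the report JSON travels in a form field named 'input_data',
    mirroring what the LNT client tools send. Verify against your server.
    """
    body = urllib.parse.urlencode({"input_data": json.dumps(report)}).encode()
    return urllib.request.Request(SUBMIT_URL, data=body, method="POST")

request = build_submit_request({"Machine": {"Name": "example", "Info": {}}})
print(request.get_method(), request.full_url)
```

Sending the request would then be a single ``urllib.request.urlopen(request)`` call, once the URL points at a real server.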
Modified: lnt/trunk/docs/contents.rst
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/docs/contents.rst?rev=266325&r1=266324&r2=266325&view=diff
==============================================================================
--- lnt/trunk/docs/contents.rst (original)
+++ lnt/trunk/docs/contents.rst Thu Apr 14 10:34:49 2016
@@ -15,9 +15,11 @@ Contents
tests
changes
-
+
concepts
+ importing_data
+
developer_guide
todo
Added: lnt/trunk/docs/importing_data.rst
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/docs/importing_data.rst?rev=266325&view=auto
==============================================================================
--- lnt/trunk/docs/importing_data.rst (added)
+++ lnt/trunk/docs/importing_data.rst Thu Apr 14 10:34:49 2016
@@ -0,0 +1,91 @@
+.. _importing_data:
+
+Importing Data from Other Test Systems
+======================================
+
+First, make sure you've understood the underlying :ref:`concepts` used by LNT.
+
+Given how simple it is to produce your own results and send them to LNT,
+it is common not to use the LNT client application at all, and instead have a
+custom script run your tests and submit the data to the LNT server. Details
+on how to do this are in :mod:`lnt.testing`.
+
+If for some reason you prefer to generate the json file directly, the current
+format is shown below. Using the APIs in :mod:`lnt.testing` remains
+recommended, as they offer better protection against future changes to the
+json format::
+
+ {
+ "Machine": {
+ "Info": {
+ (_String_: _String_)* // optional extra info about the machine.
+ },
+ "Name": _String_ // machine name, mandatory
+ },
+ "Run": {
+ "End Time": "%Y-%m-%d %H:%M:%S", // mandatory
+ "Start Time": "%Y-%m-%d %H:%M:%S", // mandatory
+ "Info": {
+ "run_order": _String_, // mandatory
+ "tag": "nts" // mandatory
+ (_String_: _String_)* // optional extra info about the run.
+ }
+ },
+ "Tests": [
+ {
+ "Data": [ (float+) ],
+ "Info": {},
+ "Name": "nts._ProgramName_._metric_"
+ }+
+ ]
+ }
+
+
+A concrete small example is::
+
+ {
+ "Machine": {
+ "Info": {
+ },
+ "Name": "LNT-AArch64-A53-O3__clang_DEV__aarch64"
+ },
+ "Run": {
+ "End Time": "2016-04-07 14:25:52",
+ "Start Time": "2016-04-07 09:33:48",
+ "Info": {
+ "run_order": "265649",
+ "tag": "nts"
+ }
+ },
+ "Tests": [
+ {
+ "Data": [
+ 0.1056,
+ 0.1055
+ ],
+ "Info": {},
+ "Name": "nts.suite1/program1.exec"
+ },
+ {
+ "Data": [
+ 0.2136
+ ],
+ "Info": {},
+ "Name": "nts.suite2/program1.exec"
+ }
+ ]
+ }
+
+Make sure that:
+ * The Run.Info.tag value is "nts".
+ * The test names always start with "nts.".
+ * The extension of the test name indicates what kind of data is recorded.
+ Currently accepted extensions in the NTS database are:
+
+ * ".exec": represents execution time - a lower number is better.
+ * ".score": represents a benchmark score - a higher number is better.
+ * ".hash": represents a hash of the binary program being executed. This is
+   used to detect whether the generated code has changed between compiler
+   versions.
+ * ".compile": represents the compile time of the program.
+ All of these metrics are optional.
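A custom script in the spirit of the new document could assemble the report shown above with plain Python before switching to :mod:`lnt.testing`. A minimal sketch, with hypothetical machine name, run order, and sample values used purely for illustration; the field names and time format follow the structure documented in the added importing_data.rst:

```python
import json
from datetime import datetime

def make_report(machine_name, run_order, tests):
    """Assemble a minimal LNT report dict in the documented json format.

    `tests` maps full test names (e.g. "nts.suite1/program1.exec") to
    lists of float samples. Times use the "%Y-%m-%d %H:%M:%S" format
    required by the Run record.
    """
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return {
        "Machine": {"Name": machine_name, "Info": {}},
        "Run": {
            "Start Time": now,
            "End Time": now,
            "Info": {"run_order": run_order, "tag": "nts"},
        },
        "Tests": [
            {"Name": name, "Info": {}, "Data": samples}
            for name, samples in tests.items()
        ],
    }

# Hypothetical values for illustration only.
report = make_report("LNT-example-machine", "265649",
                     {"nts.suite1/program1.exec": [0.1056, 0.1055]})
print(json.dumps(report, indent=2))
```

The resulting dict can be dumped with ``json.dumps`` and submitted via ``lnt import``, ``lnt submit``, or an HTTP POST to submitRun.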