[PATCH] D43314: [lit] - Allow 1 test to report multiple micro-test results to provide support for microbenchmarks.

Brian Homerding via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 14 14:11:37 PST 2018


homerdin created this revision.
homerdin added reviewers: MatzeB, hfinkel, rengolin.
Herald added a subscriber: delcypher.

These changes allow a Result object to contain nested Result objects in order to support microbenchmarks.  Currently lit is restricted to reporting one result object per test; this change adds support for tests that want to report individual timings for individual kernels.

This revision is the result of the discussions I have seen in https://reviews.llvm.org/D32272#794759, https://reviews.llvm.org/D37421#f8003b27 and https://reviews.llvm.org/D38496.  It separates out the changes proposed in https://reviews.llvm.org/D40077.

With this change I will be adding the LCALS (Livermore Compiler Analysis Loop Suite) collection of loop kernels to the LLVM test suite using the Google Benchmark library (link to revision to come soon).

Previously, microbenchmarks were handled by using macros to group sets of microbenchmarks together, building many executables while still getting a grouped timing (MultiSource/TSVC).  Recently the Google Benchmark library was added to the test suite and used via a litsupport plugin.  However, the one-test-one-result limitation restricted its use to passing a runtime option that runs only one microbenchmark, with several hand-written tests (MicroBenchmarks/XRay) running the same executable many times.  I will update the litsupport plugin to use the new functionality (link to revision to come soon).

These changes allow lit to report micro-test results when desired, so that many precise timing results can be obtained from a single run of one test executable.
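The nesting model can be sketched as follows.  Note that `Result` here is a simplified stand-in for the real `lit.Test.Result` class (which also validates metrics and carries a richer result code), so this is only an illustration of the intended API:

```python
# Simplified stand-in for lit.Test.Result, illustrating how one test
# result can carry nested per-kernel micro-test results.
class Result:
    def __init__(self, code, output='', elapsed=None):
        self.code = code
        self.output = output
        self.elapsed = elapsed
        # Metrics reported by this test (or micro-test).
        self.metrics = {}
        # Nested micro-test results, keyed by micro-test name.
        self.microResults = {}

    def addMicroResult(self, name, microResult):
        # One nested Result per micro-test name; duplicates are an error.
        if name in self.microResults:
            raise ValueError("Result already includes microResult for %r"
                             % (name,))
        if not isinstance(microResult, Result):
            raise TypeError("unexpected microResult value %r" % (microResult,))
        self.microResults[name] = microResult

# One lit test reporting several kernel timings from a single run.
top = Result('PASS')
for kernel, secs in [('BM_INIT3_RAW/171', 0.1789),
                     ('BM_INIT3_RAW/5001', 16.508)]:
    micro = Result('PASS')
    micro.metrics['exec_time'] = secs
    top.addMicroResult(kernel, micro)

print(sorted(top.microResults))
```

A consumer (such as lit's terminal and JSON reporters in the patch below) can then iterate over `top.microResults.items()` to print or serialize each micro-test alongside its parent.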

Example Output from LCALS:

Terminal:

  PASS: test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw.test (1 of 1)
  ********** TEST 'test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw.test' RESULTS **********
  MicroBenchmarks: 12 
  compile_time: 4.9991 
  hash: "256a4aeeb9d90efdbb4ac9fe7ab8255c" 
  link_time: 0.0342 
  **********
  *** MICRO-TEST 'BM_IF_QUAD_RAW/171' RESULTS ***
      exec_time:  3.3051
  *** MICRO-TEST 'BM_IF_QUAD_RAW/44217' RESULTS ***
      exec_time:  838.9010
  *** MICRO-TEST 'BM_IF_QUAD_RAW/5001' RESULTS ***
      exec_time:  94.6292
  *** MICRO-TEST 'BM_INIT3_RAW/171' RESULTS ***
      exec_time:  0.1789
  *** MICRO-TEST 'BM_INIT3_RAW/44217' RESULTS ***
      exec_time:  163.0100
  *** MICRO-TEST 'BM_INIT3_RAW/5001' RESULTS ***
      exec_time:  16.5080
  *** MICRO-TEST 'BM_MULADDSUB_RAW/171' RESULTS ***
      exec_time:  0.3479
  *** MICRO-TEST 'BM_MULADDSUB_RAW/44217' RESULTS ***
      exec_time:  179.4560
  *** MICRO-TEST 'BM_MULADDSUB_RAW/5001' RESULTS ***
      exec_time:  16.9089
  *** MICRO-TEST 'BM_TRAP_INT_RAW/171' RESULTS ***
      exec_time:  2.5511
  *** MICRO-TEST 'BM_TRAP_INT_RAW/44217' RESULTS ***
      exec_time:  654.8120
  *** MICRO-TEST 'BM_TRAP_INT_RAW/5001' RESULTS ***
      exec_time:  73.7429
  **********

Partial JSON output:

  "tests": [
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 654.812
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_TRAP_INT_RAW/44217.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 3.30509
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_IF_QUAD_RAW/171.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 179.456
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_MULADDSUB_RAW/44217.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 163.01
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_INIT3_RAW/44217.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 16.9089
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_MULADDSUB_RAW/5001.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 73.7429
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_TRAP_INT_RAW/5001.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 94.6292
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_IF_QUAD_RAW/5001.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 838.901
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_IF_QUAD_RAW/44217.test",
      "output": ""
    },
    {
      "code": "PASS",
      "elapsed": null,
      "metrics": {
        "exec_time": 0.347863
      },
      "name": "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_MULADDSUB_RAW/171.test",
      "output": ""
    },
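The micro-test names in the JSON above are derived from the parent test's name, as in the main.py hunk below: the parent's trailing ".test" suffix is dropped, then the micro-test name and ".test" are appended.  A sketch of that naming logic:

```python
def micro_full_name(parent_name, micro_name):
    # Mirrors the naming in the main.py hunk: drop the parent's trailing
    # ".test" (5 characters), then append the micro-test name and ".test".
    return parent_name[:-5] + '/' + micro_name + '.test'

name = micro_full_name(
    'test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw.test',
    'BM_TRAP_INT_RAW/44217')
print(name)
```

This yields the "test-suite :: MicroBenchmarks/LCALS/SubsetBRawLoops/lcalsBRaw/BM_TRAP_INT_RAW/44217.test" entry seen in the JSON output above.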


https://reviews.llvm.org/D43314

Files:
  utils/lit/lit/Test.py
  utils/lit/lit/main.py


Index: utils/lit/lit/main.py
===================================================================
--- utils/lit/lit/main.py
+++ utils/lit/lit/main.py
@@ -81,6 +81,17 @@
                 print('%s: %s ' % (metric_name, value.format()))
             print("*" * 10)
 
+        # Report micro-tests, if present
+        if test.result.microResults:
+            items = sorted(test.result.microResults.items())
+            for micro_test_name, micro_test in items:
+                print("%s MICRO-TEST '%s' RESULTS %s" %
+                         ('*'*3, micro_test_name, '*'*3))
+
+                for metric_name, value in micro_test.metrics.items():
+                    print('    %s:  %s ' % (metric_name, value.format()))
+            print("*" * 10)
+
         # Ensure the output is flushed.
         sys.stdout.flush()
 
@@ -113,6 +124,23 @@
             for key, value in test.result.metrics.items():
                 metrics_data[key] = value.todata()
 
+        # Report micro-tests separately, if present
+        if test.result.microResults:
+            for key, micro_test in test.result.microResults.items():
+                micro_full_name = test.getFullName()[:-5] + '/' + key + ".test"
+
+                micro_test_data = {
+                    'name' : micro_full_name,
+                    'code' : micro_test.code.name,
+                    'output' : micro_test.output,
+                    'elapsed' : micro_test.elapsed }
+                if micro_test.metrics:
+                    micro_test_data['metrics'] = micro_metrics_data = {}
+                    for key, value in micro_test.metrics.items():
+                        micro_metrics_data[key] = value.todata()
+
+                tests_data.append(micro_test_data)
+
         tests_data.append(test_data)
 
     # Write the output.
Index: utils/lit/lit/Test.py
===================================================================
--- utils/lit/lit/Test.py
+++ utils/lit/lit/Test.py
@@ -135,6 +135,8 @@
         self.elapsed = elapsed
         # The metrics reported by this test.
         self.metrics = {}
+        # The micro-test results reported by this test.
+        self.microResults = {}
 
     def addMetric(self, name, value):
         """
@@ -153,6 +155,24 @@
             raise TypeError("unexpected metric value: %r" % (value,))
         self.metrics[name] = value
 
+    def addMicroResult(self, name, microResult):
+        """
+        addMicroResult(name, microResult)
+
+        Attach a micro-test result to the test result, with the given name and
+        result.  It is an error to attempt to attach a micro-test with the 
+        same name multiple times.
+
+        Each micro-test result must be an instance of the Result class.
+        """
+        if name in self.microResults:
+            raise ValueError("Result already includes microResult for %r" % (
+                   name,))
+        if not isinstance(microResult, Result):
+            raise TypeError("unexpected MicroResult value %r" % (microResult,))
+        self.microResults[name] = microResult
+
+
 # Test classes.
 
 class TestSuite:


-------------- next part --------------
A non-text attachment was scrubbed...
Name: D43314.134303.patch
Type: text/x-patch
Size: 3111 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20180214/0b6a66ab/attachment.bin>

