[llvm-commits] [LNT] r160747 - in /lnt/trunk/lnt/server/ui: templates/v4_daily_report.html views.py

Daniel Dunbar daniel at zuster.org
Wed Jul 25 11:23:12 PDT 2012


Author: ddunbar
Date: Wed Jul 25 13:23:12 2012
New Revision: 160747

URL: http://llvm.org/viewvc/llvm-project?rev=160747&view=rev
Log:
[lnt/v0.4] lnt.server.ui/v4_daily_report: Build and dump raw result data table.

  - Still no attempt at presenting the data in a human-grokable format, but we
    are now querying all the basic data we want to aggregate (not including the
    extra data used for comparison runs).

  - Performance is not awesome, but it is usable considering the volume of data
    (a local server takes ~1s per report).
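
For orientation, the new view hands the template a nested list indexed as
result_table[test_index][day_index][machine_index], where each cell holds a
(possibly empty) list of raw sample tuples. The self-contained Python sketch
below illustrates only that shape; the test names, machine names, and sample
values are hypothetical and not part of this commit:

    # Hypothetical illustration of the result_table shape built by the view;
    # the tests, machines, and sample values below are made up.
    reporting_tests = ["foo-test", "bar-test"]
    reporting_machines = ["machine-A", "machine-B"]
    num_prior_days_to_include = 3

    # Stand-in for the (run_id, test_id) -> samples aggregation in the view,
    # keyed directly by (day_index, machine, test) for brevity.
    samples_by_cell = {
        (0, "machine-A", "foo-test"): [(0.12, 1.5)],
    }

    result_table = []
    for test in reporting_tests:
        test_results = []
        for day in range(num_prior_days_to_include):
            day_results = []
            for machine in reporting_machines:
                # An empty list marks a cell with no reported samples for this
                # (test, machine) pair in this day slice.
                day_results.append(samples_by_cell.get((day, machine, test), []))
            test_results.append(day_results)
        result_table.append(test_results)

    assert result_table[0][0][0] == [(0.12, 1.5)]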

Modified:
    lnt/trunk/lnt/server/ui/templates/v4_daily_report.html
    lnt/trunk/lnt/server/ui/views.py

Modified: lnt/trunk/lnt/server/ui/templates/v4_daily_report.html
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/lnt/server/ui/templates/v4_daily_report.html?rev=160747&r1=160746&r2=160747&view=diff
==============================================================================
--- lnt/trunk/lnt/server/ui/templates/v4_daily_report.html (original)
+++ lnt/trunk/lnt/server/ui/templates/v4_daily_report.html Wed Jul 25 13:23:12 2012
@@ -50,4 +50,34 @@
 {% endfor %}
 </table>
 
+{# Generate the table showing the raw sample data. #}
+<h3>Result Table</h3>
+<table border="1">
+  <thead>
+    <tr>
+      <th>Test Name</th>
+      <th>Machine Name</th>
+{% for i in range(num_prior_days_to_include)|reverse %}
+      <th>Day - {{i}}</th>
+{% endfor %}
+    </tr>
+  </thead>
+{% for test,test_results in zip(reporting_tests, result_table) %}
+  <tr>
+    <td colspan="2"><b>{{test.name}}</b></td>
+    <td colspan="{{num_prior_days_to_include}}"> </td>
+  </tr>
+{%   for machine in reporting_machines %}
+{%   set machine_loop = loop %}
+  <tr>
+    <td> </td>
+    <td>{{machine.name}}</td>
+{%     for day_results in test_results|reverse %}
+    <td>{{day_results[machine_loop.index0]}}</td>
+{%     endfor %}
+  </tr>
+{%   endfor %}
+{% endfor %}
+</table>
+
 {% endblock %}

Modified: lnt/trunk/lnt/server/ui/views.py
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/lnt/server/ui/views.py?rev=160747&r1=160746&r2=160747&view=diff
==============================================================================
--- lnt/trunk/lnt/server/ui/views.py (original)
+++ lnt/trunk/lnt/server/ui/views.py Wed Jul 25 13:23:12 2012
@@ -1104,9 +1104,9 @@
 
     # Find all the runs that occurred for each day slice.
     prior_runs = [ts.query(ts.Run).\
-                      filter(ts.Run.start_time > prev_day).\
+                      filter(ts.Run.start_time > prior_day).\
                       filter(ts.Run.start_time <= day).all()
-                  for day,prev_day in util.pairs(prior_days)]
+                  for day,prior_day in util.pairs(prior_days)]
 
     # For every machine, we only want to report on the last run order that was
     # reported for that machine for the particular day range.
@@ -1135,20 +1135,87 @@
                          if r.order is machine_order_map[r.machine]]
 
     # Form a list of all relevant runs.
-    relevant_runs = [r
-                     for runs in prior_runs
-                     for r in runs]
+    relevant_runs = sum(prior_runs, [])
 
     # Find the union of all machines reporting in the relevant runs.
     reporting_machines = list(set(r.machine for r in relevant_runs))
     reporting_machines.sort(key = lambda m: m.name)
 
+    # We aspire to present a "lossless" report, in that we don't ever hide any
+    # possible change due to aggregation. In addition, we want to make it easy
+    # to see the relation of results across all the reporting machines. In
+    # particular:
+    #
+    #   (a) When a test starts failing or passing on one machine, it should be
+    #       easy to see how that test behaved on other machines. This makes it
+    #       easy to identify the scope of the change.
+    #
+    #   (b) When a performance change occurs, it should be easy to see the
+    #       performance of that test on other machines. This makes it easy to
+    #       see the scope of the change and to potentially apply human
+    #       discretion in determining whether or not a particular result is
+    #       worth considering (as opposed to noise).
+    #
+    # The idea is as follows: for each (machine, test, primary_field), classify
+    # the result into one of REGRESSED, IMPROVED, UNCHANGED_FAIL, ADDED,
+    # REMOVED, PERFORMANCE_REGRESSED, PERFORMANCE_IMPROVED.
+    #
+    # For now, we then just aggregate by test and present the results as
+    # is. This is lossless, but not nearly as nice to read as the old style
+    # per-machine reports. In the future we will want to find a way to combine
+    # the per-machine report style of presenting results aggregated by the kind
+    # of status change, while still managing to present the overview across
+    # machines.
+
+    # Batch load all of the samples reported by all these runs.
+    columns = [ts.Sample.run_id,
+               ts.Sample.test_id]
+    columns.extend(f.column
+                   for f in ts.sample_fields)
+    samples = ts.query(*columns).\
+        filter(ts.Sample.run_id.in_(
+            r.id for r in relevant_runs)).all()
+
+    # Find the union of tests reported in the relevant runs.
+    #
+    # FIXME: This is not particularly efficient; should we just use all tests in
+    # the database?
+    reporting_tests = ts.query(ts.Test).\
+        filter(ts.Test.id.in_(set(s[1] for s in samples))).\
+        order_by(ts.Test.name).all()
+
+    # Aggregate all of the samples by (run_id, test_id).
+    sample_map = util.multidict()
+    for s in samples:
+        sample_map[(s[0], s[1])] = s[2:]
+
+    # Build the result table:
+    #   result_table[test_index][day_index][machine_index] = {samples}
+    result_table = []
+    for test in reporting_tests:
+        key = test
+        test_results = []
+        for day_runs in prior_runs:
+            day_results = []
+            for machine in reporting_machines:
+                # Collect all the results for this machine.
+                results = [s
+                           for run in day_runs
+                           if run.machine is machine
+                           for s in sample_map.get((run.id, test.id), ())]
+                day_results.append(results)
+            test_results.append(day_results)
+        result_table.append(test_results)
+
+    # FIXME: Now compute ComparisonResult objects for each (test, machine, day).
+
     return render_template(
         "v4_daily_report.html", ts=ts, day_start_offset=day_start_offset,
         num_prior_days_to_include=num_prior_days_to_include,
-        reporting_machines=reporting_machines,
+        reporting_machines=reporting_machines, reporting_tests=reporting_tests,
         prior_days=prior_days, next_day=next_day,
-        prior_days_machine_order_map=prior_days_machine_order_map)
+        prior_days_machine_order_map=prior_days_machine_order_map,
+        result_table=result_table)
 
 ###
 # Cross Test-Suite V4 Views
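
The FIXME in the new code leaves the per-(test, machine, day) classification to
a later change; this commit only builds the raw table. As a rough sketch only
(not the committed code, and not LNT's ComparisonResult API), one hypothetical
way to bucket a cell from its current-day and previous-day sample values might
look like the following. The 5% threshold, the averaging, and the 'UNCHANGED'
placeholder are assumptions, and the status-based categories (REGRESSED,
IMPROVED, UNCHANGED_FAIL) would additionally need the status sample field,
which this sketch ignores:

    # Hypothetical per-cell classification sketch; the category names follow
    # the comment in views.py, but the logic and threshold are assumptions.
    def classify_cell(prev_samples, cur_samples, threshold=0.05):
        if not prev_samples and not cur_samples:
            return None
        if not prev_samples:
            return 'ADDED'
        if not cur_samples:
            return 'REMOVED'
        prev_avg = float(sum(prev_samples)) / len(prev_samples)
        cur_avg = float(sum(cur_samples)) / len(cur_samples)
        if prev_avg == 0:
            return 'UNCHANGED'
        delta = (cur_avg - prev_avg) / prev_avg
        if delta > threshold:
            return 'PERFORMANCE_REGRESSED'
        if delta < -threshold:
            return 'PERFORMANCE_IMPROVED'
        return 'UNCHANGED'

    # Example: a ~20% slowdown relative to the prior day is flagged.
    assert classify_cell([0.50, 0.52], [0.61, 0.62]) == 'PERFORMANCE_REGRESSED'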
