[llvm-commits] [LNT] r154932 - in /lnt/trunk/lnt/server: reporting/summaryreport.py ui/templates/v4_summary_report.html ui/views.py
Daniel Dunbar
daniel at zuster.org
Tue Apr 17 10:50:04 PDT 2012
Author: ddunbar
Date: Tue Apr 17 12:50:04 2012
New Revision: 154932
URL: http://llvm.org/viewvc/llvm-project?rev=154932&view=rev
Log:
lnt.server.reporting: Add a new SummaryReport object.
This is pretty ad hoc at the moment, but it provides a way to get an overall
summary of how we are doing on performance across releases. It handles:
- Aggregate across multiple machines and runs.
- Aggregate across both the "nightly test" and "compile time" test suites.
- Present results in a single relatively simple web report.
o Results get broken down into 4 main charts:
{ Compile Time, Execution Time } x { Debug Builds, Release Builds }
o Results also include a pie chart showing the breakdown of single file times
(which parts of the compiler the time is being spent in).
The aggregation process is currently:
1. "Nightly test" results are aggregated into an ad hoc metric which we label "Lmark":
- Runs are aggregated into debug vs release, and other optimization flags are
ignored.
- Runs are aggregated into 'x86' and 'ARM'.
- Compile time for all benchmarks is summed. This means longer-building
benchmarks are currently weighted more heavily.
- Execution time for a limited set of benchmarks is summed. We limit
ourselves to SPEC and a handful of other tests which we non-scientifically
believe to be interesting benchmarks. Again, longer-running benchmarks are
weighted more heavily by this.
2. For the "compile time" suite, we:
- Only pay attention to wall time results for now. Reporting both user time
and wall time is too much data to concisely present. I do plan to add
memory usage. We may want to revisit this, because the user time and wall
time are both quite interesting to see, particularly for full build timings.
- Aggregate full builds across all job sizes (by the normalized mean, to
weight each job size equally).
- Aggregate all single file tests regardless of input (by mean).
A rough sketch of this aggregation is given below.
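For illustration only, here is a rough standalone sketch (not part of this
patch) of the aggregation scheme described above. The function and variable
names are hypothetical; the real implementation lives in the Aggregation
classes in summaryreport.py below.

def lmark_compile_time(median_compile_times):
    # "Lmark" compile time: sum the per-benchmark medians, so longer-building
    # benchmarks carry more weight.
    return sum(median_compile_times)

def full_build_index(medians_by_job_size):
    # Normalize each job size's series against its first entry, then average,
    # so every job size contributes equally regardless of absolute build time.
    normalized = [[v / series[0] for v in series]
                  for series in medians_by_job_size]
    count = len(normalized)
    return [sum(column) / count for column in zip(*normalized)]

def single_file_index(medians_by_input):
    # Plain mean across inputs for the single file tests.
    count = len(medians_by_input)
    return [sum(column) / count for column in zip(*medians_by_input)]

# Example: two report orders (e.g. two releases) and two job sizes.
print(full_build_index([[100.0, 90.0], [40.0, 38.0]]))  # [1.0, 0.925]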
This is close to being useful, but there are lots of areas that need additional
work:
- The report definition currently has to be written by hand (inside the
instance); there is no UI for it. A hypothetical example configuration is
sketched after this list.
- No way to investigate results further (aggregation is opaque):
o It would be nice if there were a way to "dive in" on particular results and
crack open the aggregation function to understand the values behind a
particular plot.
- Performance of the report generation sucks.
- From a design point of view, the code relies heavily on implicit assumptions
about the test suites. In an ideal world, we would find a way to make the
information needed to produce this report all data driven.
- I would like to investigate (or provide explicit options for) some of the
choices of aggregation functions, and how changing them impacts the results.
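For reference, a hypothetical summary_report_config.json (the hand-written
report definition mentioned above) might look like the following sketch. The
key names match what v4_summary_report() in views.py reads; the revisions,
patterns, and machine IDs here are invented.

example_config = {
    # Each entry is a list of llvm_project_revision values that form one
    # column of the report; the first value is used as the axis tick label.
    "orders": [["150000"], ["152000"], ["154000"]],
    # Regexes matched against machine names to pick the machines to include.
    "machine_patterns": ["nightly-x86_64-.*", "compile-arm-.*"],
    # Machine-id remapping for merged machines; the view converts the keys
    # back to ints when constructing the SummaryReport.
    "machines_to_merge": {"12": 7, "15": 7},
}

The file is loaded from the instance temporary directory (see
get_summary_config_path() in the views.py change below).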
Added:
lnt/trunk/lnt/server/reporting/summaryreport.py
lnt/trunk/lnt/server/ui/templates/v4_summary_report.html
Modified:
lnt/trunk/lnt/server/ui/views.py
Added: lnt/trunk/lnt/server/reporting/summaryreport.py
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/lnt/server/reporting/summaryreport.py?rev=154932&view=auto
==============================================================================
--- lnt/trunk/lnt/server/reporting/summaryreport.py (added)
+++ lnt/trunk/lnt/server/reporting/summaryreport.py Tue Apr 17 12:50:04 2012
@@ -0,0 +1,475 @@
+import re
+
+import lnt.testing
+import lnt.util.stats
+
+###
+# Aggregation Function
+
+class Aggregation(object):
+ def __init__(self):
+ self.is_initialized = False
+
+ def __repr__(self):
+ return repr(self.getvalue())
+
+ def getvalue(self):
+ raise NotImplementedError
+
+ def append(self, values):
+ if not self.is_initialized:
+ self.is_initialized = True
+ self._initialize(values)
+ self._append(values)
+
+class Sum(Aggregation):
+ def __init__(self):
+ Aggregation.__init__(self)
+ self.sum = None
+
+ def getvalue(self):
+ return self.sum
+
+ def _initialize(self, values):
+ self.sum = [0.] * len(values)
+
+ def _append(self, values):
+ for i,value in enumerate(values):
+ self.sum[i] += value
+
+class Mean(Aggregation):
+ def __init__(self):
+ Aggregation.__init__(self)
+ self.count = 0
+ self.sum = None
+
+ def getvalue(self):
+ return [value/self.count for value in self.sum]
+
+ def _initialize(self, values):
+ self.sum = [0.] * len(values)
+
+ def _append(self, values):
+ for i,value in enumerate(values):
+ self.sum[i] += value
+ self.count += 1
+
+class GeometricMean(Aggregation):
+ def __init__(self):
+ Aggregation.__init__(self)
+ self.count = 0
+ self.product = None
+
+ def getvalue(self):
+ return [value ** (1.0/self.count) for value in self.product]
+
+ def __repr__(self):
+ return repr(self.getvalue())
+
+ def _initialize(self, values):
+ self.product = [1.] * len(values)
+
+ def _append(self, values):
+ for i,value in enumerate(values):
+ self.product[i] *= value
+ self.count += 1
+
+class NormalizedMean(Mean):
+ def _append(self, values):
+ baseline = values[0]
+ Mean._append(self, [v/baseline
+ for v in values])
+
+###
+
+class SummaryReport(object):
+ def __init__(self, db, report_orders, report_machine_patterns,
+ machines_to_merge):
+ self.db = db
+ self.testsuites = list(db.testsuite.values())
+ self.report_orders = list(report_orders)
+ self.report_machine_patterns = list(report_machine_patterns)
+ self.report_machine_rexes = [
+ re.compile(pattern)
+ for pattern in self.report_machine_patterns]
+ self.machines_to_merge = dict(machines_to_merge)
+
+ self.data_table = None
+ self.requested_machine_ids = None
+ self.requested_machines = None
+ self.runs_at_index = None
+
+ self.warnings = []
+
+ def build(self):
+ # Build a per-testsuite list of the machines that match the specified
+ # patterns.
+ def should_be_in_report(machine):
+ for rex in self.report_machine_rexes:
+ if rex.match(machine.name):
+ return True
+ self.requested_machines = dict((ts, filter(should_be_in_report,
+ ts.query(ts.Machine).all()))
+ for ts in self.testsuites)
+ self.requested_machine_ids = dict(
+ (ts, [m.id for m in machines])
+ for ts,machines in self.requested_machines.items())
+
+ # First, collect all the runs to summarize on, for each index in the
+ # report orders.
+ self.runs_at_index = []
+ for orders in self.report_orders:
+ # For each test suite...
+ runs = []
+ for ts in self.testsuites:
+ # Find all the orders that match.
+ result = ts.query(ts.Order.id).\
+ filter(ts.Order.llvm_project_revision.in_(
+ orders)).all()
+ ts_order_ids = [id for id, in result]
+
+ # Find all the runs that match those orders.
+ if not ts_order_ids:
+ ts_runs = []
+ else:
+ ts_runs = ts.query(ts.Run).\
+ filter(ts.Run.order_id.in_(ts_order_ids)).\
+ filter(ts.Run.machine_id.in_(
+ self.requested_machine_ids[ts])).all()
+
+ if not ts_runs:
+ self.warnings.append(
+ 'no runs for test suite %r in orders %r' % (
+ ts.name, orders))
+
+ runs.append((ts_runs, ts_order_ids))
+ self.runs_at_index.append(runs)
+
+ # Load the tests for each testsuite.
+ self.tests = dict((ts, dict((test.id, test)
+ for test in ts.query(ts.Test)))
+ for ts in self.testsuites)
+
+ # Compute the base table for aggregation.
+ #
+ # The table is indexed by a test name and test features, which are
+ # either extracted from the test name or from the test run (depending on
+ # the suite).
+ #
+ # Each value in the table contains an array with one item for each
+ # report_order entry, which contains all of the samples for that entry.
+ #
+ # The table keys are tuples of:
+ # (<test name>,
+ # <metric>, # Value is either 'Compile Time' or 'Execution Time'.
+ # <arch>,
+ # <build mode>, # Value is either 'Debug' or 'Release'.
+ # <machine id>)
+
+ self.data_table = {}
+ self._build_data_table()
+
+ # Compute indexed data table by applying the indexing functions.
+ self._build_indexed_data_table()
+
+ # Normalize across all machines.
+ self._build_normalized_data_table()
+
+ # Build final organized data tables.
+ self._build_final_data_tables()
+
+ def _build_data_table(self):
+ def get_nts_datapoints_for_sample(ts, run, sample):
+ # Convert the machine ID.
+ machine_id = self.machines_to_merge.get(run.machine_id,
+ run.machine_id)
+
+ # Get the test.
+ test = ts_tests[sample.test_id]
+
+ # The test name for a sample in the NTS suite is just the name of
+ # the sample test.
+ test_name = test.name
+
+ # The arch and build mode are derived from the run flags.
+ parameters = run.parameters
+ arch = parameters['ARCH']
+ if '86' in arch:
+ arch = 'x86'
+
+ if parameters['OPTFLAGS'] == '-O0':
+ build_mode = 'Debug'
+ else:
+ build_mode = 'Release'
+
+ # Return a datapoint for each passing field.
+ for field in ts.Sample.get_primary_fields():
+ # Ignore failing samples.
+ sf = field.status_field
+ if sf and sample.get_field(sf) == lnt.testing.FAIL:
+ continue
+
+ # Ignore missing samples.
+ value = sample.get_field(field)
+ if value is None:
+ continue
+
+ # Otherwise, return a datapoint.
+ if field.name == 'compile_time':
+ metric = 'Compile Time'
+ else:
+ assert field.name == 'execution_time'
+ metric = 'Execution Time'
+ yield ((test_name, metric, arch, build_mode, machine_id),
+ value)
+
+ def get_compile_datapoints_for_sample(ts, run, sample):
+ # Convert the machine ID.
+ machine_id = self.machines_to_merge.get(run.machine_id,
+ run.machine_id)
+
+ # Get the test.
+ test = ts_tests[sample.test_id]
+
+ # Extract the compile flags from the test name.
+ base_name,flags = test.name.split('(')
+ assert flags[-1] == ')'
+ other_flags = []
+ build_mode = None
+ for flag in flags[:-1].split(','):
+ # If this is an optimization flag, derive the build mode from
+ # it.
+ if flag.startswith('-O'):
+ if '-O0' in flag:
+ build_mode = 'Debug'
+ else:
+ build_mode = 'Release'
+ continue
+
+ # If this is a 'config' flag, derive the build mode from it.
+ if flag.startswith('config='):
+ if flag == "config='Debug'":
+ build_mode = 'Debug'
+ else:
+ assert flag == "config='Release'"
+ build_mode = 'Release'
+ continue
+
+ # Otherwise, treat the flag as part of the test name.
+ other_flags.append(flag)
+
+ # Form the test name prefix from the remaining flags.
+ test_name_prefix = '%s(%s)' % (base_name, ','.join(other_flags))
+
+ # Extract the arch from the run info (and normalize).
+ parameters = run.parameters
+ arch = parameters['cc_target'].split('-')[0]
+ if arch.startswith('arm'):
+ arch = 'ARM'
+ elif '86' in arch:
+ arch = 'x86'
+
+ # The metric is fixed.
+ metric = 'Compile Time'
+
+ # Report the user and wall time.
+ for field in ts.Sample.get_primary_fields():
+ if field.name not in ('user_time', 'wall_time'):
+ continue
+
+ # Ignore failing samples.
+ sf = field.status_field
+ if sf and sample.get_field(sf) == lnt.testing.FAIL:
+ continue
+
+ # Ignore missing samples.
+ value = sample.get_field(field)
+ if value is None:
+ continue
+
+ # Otherwise, return a datapoint.
+ yield (('%s.%s' % (test_name_prefix, field.name), metric, arch,
+ build_mode, machine_id),
+ sample.get_field(field))
+
+ def get_datapoints_for_sample(ts, run, sample):
+ # The exact datapoints in each sample depend on the testsuite
+ if ts.name == 'nts':
+ return get_nts_datapoints_for_sample(ts, run, sample)
+ else:
+ assert ts.name == 'compile'
+ return get_compile_datapoints_for_sample(ts, run, sample)
+
+ # For each column...
+ for index,runs in enumerate(self.runs_at_index):
+ # For each test suite and run list...
+ for ts,(ts_runs,_) in zip(self.testsuites, runs):
+ ts_tests = self.tests[ts]
+
+ # For each run...
+ for run in ts_runs:
+ # Load all the samples for this run.
+ samples = ts.query(ts.Sample).filter(
+ ts.Sample.run_id == run.id)
+ for sample in samples:
+ datapoints = list()
+ for key,value in \
+ get_datapoints_for_sample(ts, run, sample):
+ items = self.data_table.get(key)
+ if items is None:
+ items = [[]
+ for _ in self.report_orders]
+ self.data_table[key] = items
+ items[index].append(value)
+
+ def _build_indexed_data_table(self):
+ def is_in_execution_time_filter(name):
+ for key in ("SPEC", "ClamAV", "lencod", "minisat", "SIBSim4",
+ "SPASS", "sqlite3", "viterbi", "Bullet"):
+ if key in name:
+ return True
+
+ def compute_index_name(key):
+ test_name,metric,arch,build_mode,machine_id = key
+
+ # If this is a nightly test..
+ if test_name.startswith('SingleSource/') or \
+ test_name.startswith('MultiSource/') or \
+ test_name.startswith('External/'):
+ # If this is a compile time test, aggregate all values into a
+ # cumulative compile time.
+ if metric == 'Compile Time':
+ return ('Lmark', metric, build_mode, arch, machine_id), Sum
+
+ # Otherwise, this is an execution time. Index the cumulative
+ # result of a limited set of benchmarks.
+ assert metric == 'Execution Time'
+ if is_in_execution_time_filter(test_name):
+ return ('Lmark', metric, build_mode, arch, machine_id), Sum
+
+ # Otherwise, ignore the test.
+ return
+
+ # Otherwise, we have a compile time suite test.
+
+ # Ignore user time results for now.
+ if not test_name.endswith('.wall_time'):
+ return
+
+ # Index full builds across all job sizes.
+ if test_name.startswith('build/'):
+ project_name,subtest_name = re.match(
+ r'build/(.*)\(j=[0-9]+\)\.(.*)', str(test_name)).groups()
+ return (('Full Build (%s)' % (project_name,),
+ metric, build_mode, arch, machine_id),
+ NormalizedMean)
+
+ # Index single file tests across all inputs.
+ if test_name.startswith('compile/'):
+ file_name,stage_name,subtest_name = re.match(
+ r'compile/(.*)/(.*)/\(\)\.(.*)', str(test_name)).groups()
+ return (('Single File (%s)' % (stage_name,),
+ metric, build_mode, arch, machine_id),
+ Mean)
+
+ # Index PCH generation tests by input.
+ if test_name.startswith('pch-gen/'):
+ file_name,subtest_name = re.match(
+ r'pch-gen/(.*)/\(\)\.(.*)', str(test_name)).groups()
+ return (('PCH Generation (%s)' % (file_name,),
+ metric, build_mode, arch, machine_id),
+ Mean)
+
+ # Otherwise, ignore the test.
+ return
+
+ def is_missing_samples(values):
+ for samples in values:
+ if not samples:
+ return True
+
+ self.indexed_data_table = {}
+ for key,values in self.data_table.items():
+ # Ignore any test which is missing some data.
+ if is_missing_samples(values):
+ self.warnings.append("missing values for %r" % (key,))
+ continue
+
+ # Select the median values.
+ medians = [lnt.util.stats.median(samples)
+ for samples in values]
+
+ # Compute the index name, and ignore unused tests.
+ result = compute_index_name(key)
+ if result is None:
+ continue
+
+ index_name,index_class = result
+ item = self.indexed_data_table.get(index_name)
+ if item is None:
+ self.indexed_data_table[index_name] = item = index_class()
+ item.append(medians)
+
+ def _build_normalized_data_table(self):
+ self.normalized_data_table = {}
+ for key,indexed_value in self.indexed_data_table.items():
+ test_name, metric, build_mode, arch, machine_id = key
+ if test_name.startswith('Single File'):
+ aggr = Mean
+ else:
+ aggr = NormalizedMean
+ normalized_key = (test_name, metric, build_mode, arch)
+ item = self.normalized_data_table.get(normalized_key)
+ if item is None:
+ self.normalized_data_table[normalized_key] = \
+ item = aggr()
+ item.append(indexed_value.getvalue())
+
+ single_file_stage_order = [
+ 'init', 'driver', 'syntax', 'irgen_only', 'codegen', 'assembly']
+ def _build_final_data_tables(self):
+ def invert(values):
+ return [1.0/v for v in values]
+
+ self.grouped_table = {}
+ self.single_file_table = {}
+ for key,normalized_value in self.normalized_data_table.items():
+ test_name, metric, build_mode, arch = key
+
+ # If this isn't a single file test, add a plot for it grouped by
+ # metric and build mode.
+ group_key = (metric, build_mode)
+ if not test_name.startswith('Single File'):
+ items = self.grouped_table[group_key] = self.grouped_table.get(
+ group_key, [])
+
+ items.append((test_name, arch,
+ invert(normalized_value.getvalue())))
+ continue
+
+ # Add to the single file stack.
+ stage_name, = re.match('Single File \((.*)\)', test_name).groups()
+ try:
+ stack_index = self.single_file_stage_order.index(stage_name)
+ except ValueError:
+ stack_index = None
+
+ # If we don't have an index for this stage, ignore it.
+ if stack_index is None:
+ continue
+
+ # Otherwise, add the last value to the single file stack.
+ stack = self.single_file_table.get(group_key)
+ if stack is None:
+ self.single_file_table[group_key] = stack = \
+ [None] * len(self.single_file_stage_order)
+ stack[stack_index] = normalized_value.getvalue()[-1]
+
+ # If this is the last single file stage, also add a plot for it.
+ if stage_name == self.single_file_stage_order[-1]:
+ items = self.grouped_table[group_key] = self.grouped_table.get(
+ group_key, [])
+ values = normalized_value.getvalue()
+ baseline = values[0]
+ items.append(('Single File Tests', arch,
+ invert([v/baseline for v in values])))
Added: lnt/trunk/lnt/server/ui/templates/v4_summary_report.html
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/lnt/server/ui/templates/v4_summary_report.html?rev=154932&view=auto
==============================================================================
--- lnt/trunk/lnt/server/ui/templates/v4_summary_report.html (added)
+++ lnt/trunk/lnt/server/ui/templates/v4_summary_report.html Tue Apr 17 12:50:04 2012
@@ -0,0 +1,277 @@
+{% extends "layout.html" %}
+{% set components = [] %}
+{% block head %}
+ <script src="{{ url_for('.static', filename='popup.js') }}"></script>
+ <script language="javascript" type="text/javascript"
+ src="{{ url_for('.static',
+ filename='jquery/1.5/jquery.min.js') }}"> </script>
+ <script language="javascript" type="text/javascript"
+ src="{{ url_for('.static',
+ filename='flot/jquery.flot.min.js') }}"> </script>
+ <script language="javascript" type="text/javascript"
+ src="{{ url_for('.static',
+ filename='flot/jquery.flot.pie.min.js') }}"> </script>
+{% endblock %}
+{% block title %}Summary Report{% endblock %}
+{% block body %}
+
+{# Warn the user if there were a lot of warnings during report generation. #}
+{% if report.warnings|length > 20 %}
+<font color="#F00">
+<p><b>WARNING</b>: Report generation reported {{ report.warnings|length }}
+warnings.</p>
+</font>
+{% endif %}
+
+{% if true %}
+<script type="text/javascript">
+g = {}
+g.tick_list = [{%
+for order in report.report_orders %}
+ [{{ loop.index0 }}, "{{ order[0] }}"],{%
+endfor %}
+];
+g.single_file_stage_order = {{ report.single_file_stage_order|tojson|safe }};
+
+function init() {
+{% for (metric,build_mode),items in report.grouped_table|dictsort %}
+ setup_grouped_plot("{{metric.replace(' ','')}}_{{build_mode}}", {{ items|tojson|safe }});
+{% endfor %}
+
+{% for (metric,build_mode),items in report.single_file_table|dictsort %}
+ setup_single_file_plot("{{metric.replace(' ','')}}_{{build_mode}}", {{ items|tojson|safe }});
+{% endfor %}
+}
+
+function setup_single_file_plot(name, items) {
+ var graph = $("#single_file_" + name);
+ var plots = [];
+
+ for (var i = 0; i != items.length; ++i) {
+ var item = items[i];
+ if (item === null)
+ continue;
+
+ plots.push({ label : g.single_file_stage_order[i],
+ data : item });
+ }
+
+ $.plot(graph, plots, {
+ series: {
+ pie: {
+ show: true }
+ },
+ legend: {
+ show: true },
+ grid: {
+ hoverable: true }});
+
+ // Add tooltips.
+ graph.bind("plothover", function(e,p,i) {
+ update_tooltip(e, p, i, show_pie_tooltip); });
+}
+
+function setup_grouped_plot(name, items) {
+ var graph = $("#grouped_" + name);
+ var plots = [];
+
+ // Build the list of plots.
+ items.sort();
+ for (var i = 0; i != items.length; ++i) {
+ var item = items[i];
+ var test_name = item[0];
+ var arch = item[1];
+ var values = item[2];
+ var plot = [];
+
+ for (var j = 0; j != values.length; ++j) {
+ plot.push([j, 100 * (values[j] - 1)]);
+ }
+
+ plots.push({ label : test_name + " :: " + arch,
+ data : plot });
+ }
+
+ $.plot(graph, plots, {
+ legend: {
+ position: 'nw' },
+ series: {
+ lines: { show: true },
+ points: { show: true } },
+ xaxis: {
+ ticks: g.tick_list },
+ yaxis: {
+ ticks: 5 },
+ grid: {
+ hoverable: true } });
+
+ // Add tooltips.
+ graph.bind("plothover", function(e,p,i) {
+ update_tooltip(e, p, i, show_tooltip); });
+}
+
+// Show our overlay tooltip.
+g.current_tip_point = null;
+function show_tooltip(x, y, item, pos) {
+ var data = item.datapoint;
+ var tip_body = '<div id="tooltip">';
+ tip_body += "<b><u>" + item.series.label + "</u></b><br>";
+ tip_body += "<b>" + g.tick_list[data[0]][1].toString() + "</b>: ";
+ tip_body += data[1].toFixed(2) + "%" + "</div>";
+
+ $(tip_body).css( {
+ position: 'absolute',
+ display: 'none',
+ top: y + 5,
+ left: x + 5,
+ border: '1px solid #fdd',
+ padding: '2px',
+ 'background-color': '#fee',
+ opacity: 0.80
+ }).appendTo("body").fadeIn(200);
+}
+
+function show_pie_tooltip(x, y, item, pos) {
+ var tip_body = '<div id="tooltip">';
+ tip_body += "<b><u>" + item.series.label + "</u></b><br>";
+ tip_body += item.series.percent.toFixed(2) + "%" + "</div>";
+
+ $(tip_body).css( {
+ position: 'absolute',
+ display: 'none',
+ top: pos.pageY + 5,
+ left: pos.pageX + 5,
+ border: '1px solid #fdd',
+ padding: '2px',
+ 'background-color': '#fee',
+ opacity: 0.80
+ }).appendTo("body").fadeIn(200);
+}
+
+// Event handler function to update the tooltip.
+function update_tooltip(event, pos, item, show_fn) {
+ if (!item) {
+ $("#tooltip").remove();
+ g.current_tip_point = null;
+ return;
+ }
+
+ if (!g.current_tip_point || (g.current_tip_point[0] != item.datapoint[0] ||
+ g.current_tip_point[1] != item.datapoint[1])) {
+ $("#tooltip").remove();
+ g.current_tip_point = item.datapoint;
+ show_fn(pos.pageX, pos.pageY, item, pos);
+ }
+}
+
+$(function () { init(); });
+</script>
+
+{% set width='800px' %}
+{% set height='500px' %}
+<table border="1">
+ {% for metric,build_mode in report.grouped_table|sort %}
+ <tr>
+ <td>
+ <center><b>{{metric}} Speedup / {{build_mode}}</b></center>
+ <div id="grouped_{{metric.replace(' ','')}}_{{build_mode}}"
+ style="margin:20px;width:{{width}};height:{{height}};"></div>
+ </td>
+ </tr>
+ {% endfor %}
+ {% for metric,build_mode in report.single_file_table|sort %}
+ <tr>
+ <td>
+ <center><b>{{metric}} Single File Breakdown / {{build_mode}}</b></center>
+ <div id="single_file_{{metric.replace(' ','')}}_{{build_mode}}"
+ style="margin:20px;width:{{width}};height:{{height}};"></div>
+ </td>
+ </tr>
+ {% endfor %}
+</table>
+{% endif %}
+
+<h3>Release Data Table</h3>
+
+{% for (metric,build_mode),items in report.grouped_table|dictsort %}
+<h4>{{ metric }} - {{ build_mode }}</h4>
+<table>
+ <thead>
+ <tr>
+ <th>Name</th><th>Arch</th>
+ {% for order in report.report_orders %}
+ <th>{{order}}</th>
+ {% endfor %}
+ </tr>
+ </thead>
+ {% for test_name,arch,values in items|sort %}
+ <tr>
+ <td>{{test_name}}</td>
+ <td>{{arch}}</td>
+ {% for value in values %}
+ <td>{{ '%.4f' % value }}</td>
+ {% endfor %}
+ </tr>
+ {% endfor %}
+</table>
+{% endfor %}
+
+<h3>Single File Breakdown Data Table</h3>
+
+{% set keys = report.single_file_table.keys()|sort %}
+<table>
+ <thead>
+ <tr>
+ <th>Stage Name</th>
+ {% for metric,build_mode in keys %}
+ <th>{{ build_mode }}</th>
+ {% endfor %}
+ </tr>
+ </thead>
+ {% for stage in report.single_file_stage_order %}
+ {% set stage_index = loop.index0 %}
+ <tr>
+ <td>{{stage}}</td>
+ {% for key in keys %}
+ <td>
+ {% set value = report.single_file_table[key][stage_index] %}
+ {{ '%.4f' % value if value is not none else 'N/A' }}
+ </td>
+ {% endfor %}
+ </tr>
+ {% endfor %}
+</table>
+
+{% if request.args.get('show_normalized_table') %}
+<h3>Normalized Data Table</h3>
+<pre>
+{{ report.normalized_data_table|pprint }}
+</pre>
+{% endif %}
+
+{% if request.args.get('show_indexed_table') %}
+<h3>Indexed Data Table</h3>
+<pre>
+{{ report.indexed_data_table|pprint }}
+</pre>
+{% endif %}
+
+{% if request.args.get('show_raw_table') %}
+<h3>Raw Data Table</h3>
+<pre>
+{{ report.data_table|pprint }}
+</pre>
+{% endif %}
+
+{% if report.warnings %}
+{% if request.args.get('show_warnings') or report.warnings|length > 20 %}
+<h3>Warnings</h3>
+<ul>
+{% for ln in report.warnings %}
+<li><pre>{{ ln }}</pre></li>
+{% endfor %}
+</ul>
+{% endif %}
+{% endif %}
+
+{% endblock %}
Modified: lnt/trunk/lnt/server/ui/views.py
URL: http://llvm.org/viewvc/llvm-project/lnt/trunk/lnt/server/ui/views.py?rev=154932&r1=154931&r2=154932&view=diff
==============================================================================
--- lnt/trunk/lnt/server/ui/views.py (original)
+++ lnt/trunk/lnt/server/ui/views.py Tue Apr 17 12:50:04 2012
@@ -14,6 +14,7 @@
from flask import request
from flask import url_for
+import lnt.util
from lnt.db import perfdb
from lnt.server.ui.globals import db_url_for, v4_url_for
import lnt.server.reporting.analysis
@@ -60,8 +61,6 @@
@db_route('/submitRun', only_v3=False, methods=('GET', 'POST'))
def submit_run():
- from lnt.util import ImportData
-
if request.method == 'POST':
input_file = request.files.get('file')
input_data = request.form.get('input_data')
@@ -110,7 +109,7 @@
#
# FIXME: Gracefully handle formats failures and DOS attempts. We
# should at least reject overly large inputs.
- result = ImportData.import_and_report(
+ result = lnt.util.ImportData.import_and_report(
current_app.old_config, g.db_name, db, path, '<auto>', commit)
return flask.jsonify(**result)
@@ -1056,3 +1055,36 @@
neighboring_runs=neighboring_runs,
graph_plots=graph_plots, legend=legend,
use_day_axis=use_day_axis)
+
+###
+# Cross Test-Suite V4 Views
+
+import lnt.server.reporting.summaryreport
+
+def get_summary_config_path():
+ return os.path.join(current_app.old_config.tempDir,
+ 'summary_report_config.json')
+
+@db_route("/summary_report", only_v3=False)
+def v4_summary_report():
+ # FIXME: Add a UI for defining the report configuration.
+
+ # Load the summary report configuration.
+ config_path = get_summary_config_path()
+ if not os.path.exists(config_path):
+ return render_template("error.html", message="""\
+You must define a summary report configuration first.""")
+
+ with open(config_path) as f:
+ config = flask.json.load(f)
+
+ # Create the report object.
+ report = lnt.server.reporting.summaryreport.SummaryReport(
+ request.get_db(), config['orders'], config['machine_patterns'],
+ dict((int(key),value)
+ for key,value in config['machines_to_merge'].items()))
+
+ # Build the report.
+ report.build()
+
+ return render_template("v4_summary_report.html", report=report)