[llvm-commits] [zorg] r119893 - /zorg/trunk/lnt/lnt/viewer/simple.ptl

Daniel Dunbar daniel at zuster.org
Fri Nov 19 18:28:24 PST 2010


Author: ddunbar
Date: Fri Nov 19 20:28:24 2010
New Revision: 119893

URL: http://llvm.org/viewvc/llvm-project?rev=119893&view=rev
Log:
lnt.viewer.simple: Don't use comparison run (std.dev. estimation) heuristics by
default.
 - Although these generally worked, they are asymmetric in what gets
   reported, and harder to understand.

 - For now, return to the old magic 5% cut-off for treating something as
   significant.

 - I plan to move to an adaptive sampling mechanism to provide a more robust
   solution.
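
The two heuristics at issue can be sketched as follows. This is an illustrative
Python sketch, not LNT's actual implementation; all names and parameters here
are hypothetical:

```python
def is_significant_fixed_cutoff(old, new, threshold=0.05):
    """Flag a change as significant if the relative delta exceeds a
    fixed cut-off (the 'magic 5%' this commit returns to by default).
    Illustrative only; not LNT's real API."""
    if old == 0:
        return new != 0
    return abs(new - old) / abs(old) > threshold

def is_significant_stddev(new, comparison_samples, num_stddevs=2.0):
    """Flag a change that falls outside num_stddevs sample standard
    deviations of the comparison-run values (the std.dev. estimation
    heuristic this commit disables by default). Illustrative only."""
    n = len(comparison_samples)
    if n < 2:
        # Not enough comparison runs to estimate variance.
        return False
    mean = sum(comparison_samples) / n
    # Sample variance (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in comparison_samples) / (n - 1)
    return abs(new - mean) > num_stddevs * var ** 0.5
```

With `num_comparison_runs = 0` there are no samples to estimate a standard
deviation from, so only the fixed-cutoff test applies.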

Modified:
    zorg/trunk/lnt/lnt/viewer/simple.ptl

Modified: zorg/trunk/lnt/lnt/viewer/simple.ptl
URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/lnt/lnt/viewer/simple.ptl?rev=119893&r1=119892&r2=119893&view=diff
==============================================================================
--- zorg/trunk/lnt/lnt/viewer/simple.ptl (original)
+++ zorg/trunk/lnt/lnt/viewer/simple.ptl Fri Nov 19 20:28:24 2010
@@ -473,16 +473,17 @@
         try:
             num_comparison_runs = int(form['num_comparison_runs'])
         except:
-            num_comparison_runs = 5
+            num_comparison_runs = 0
 
         self.renderPopupBegin('view_options', 'View Options', True)
         form.render()
         self.renderPopupEnd()
 
-        _, text_report, html_report = NTEmailReport.getSimpleReport(None,
-            db, run, str("%s/db_%s/") % (self.root.config.zorgURL,
-                                         self.root.dbName),
-            True, True, only_html_body = True, show_graphs = show_graphs)
+        _, text_report, html_report = NTEmailReport.getSimpleReport(
+            None, db, run, str("%s/db_%s/") % (self.root.config.zorgURL,
+                                               self.root.dbName),
+            True, True, only_html_body = True, show_graphs = show_graphs,
+            num_comparison_runs = num_comparison_runs)
         self.renderPopupBegin('text_report', 'Report (Text)', True)
         """
         <pre>%s</pre>""" % (text_report,)
@@ -498,7 +499,7 @@
             interesting_runs.append(compare_to.id)
         test_names = ts_summary.get_test_names_in_runs(db, interesting_runs)
 
-        # Gather the runs to use for statistical data.
+        # Gather the runs to use for statistical data, if enabled.
         cur_id = run.id
         comparison_window = []
         for i in range(num_comparison_runs):




