<div dir="ltr">Hi Eric,<div><br></div><div>I like the feature change, but I find the implementation makes it harder to see what the conditions are. I would like:</div><div><br></div><div>0. Use the label TEST ... OUTPUT when showing the output with no failures.</div><div>1. Keep the metrics output in their own section, and with the current conditions. We can change that separately if necessary.</div><div><div><br></div><div>This should leave two boolean conditions to check, both of which can be computed prior to "if shouldShow:..." and then we continue if both are False.</div><div><br></div><div>Also, I would like to see a test that sanity checks for the expected output, and an update to the docs describing the option.</div><div><br></div><div> - Daniel</div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 19, 2015 at 1:32 PM, Eric Fiselier <span dir="ltr"><<a href="mailto:eric@efcs.ca" target="_blank">eric@efcs.ca</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi danalbert, jroelofs, ddunbar,<br>
<br>
Currently I have benchmark tests that I run with LIT and that produce meaningful output even when they pass. The aim of this patch is to modify the existing LIT output options so that they can be used to always print a test's output, with minimal changes to how output is currently handled.<br>
<br>
The main change in this patch is that `-v` can be specified multiple times to increase the amount of information that is output. Using `lit -vv` will cause LIT to report a passing test's output (if any is present).<br>
<br>
This patch also includes a secondary drive-by change that alters when metrics are output. Currently metrics are reported when:<br>
1. The test failed.<br>
2. The test passed and neither '-s' nor '-q' are specified.<br>
<br>
With this patch metrics are reported ONLY when '-s' is not specified and one of the following is true:<br>
1. The test failed.<br>
2. The test passed and '-vv' is specified.<br>
<br>
I'm happy to remove this part of the change, but I think it improves the general readability of the output. I don't think we should consider metrics to be part of the "succinct" output.<br>
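To make the proposed conditions concrete, here is a minimal sketch (not the actual lit source; the function name and parameters are illustrative) of the metrics-reporting rule described above, expressed as a standalone predicate:<br>
<br>
```python
# Illustrative sketch of the metrics-reporting condition proposed in this
# patch; the function and parameter names are hypothetical, not lit's API.
def should_report_metrics(is_failure, verbosity, succinct):
    """Report metrics only when '-s' is not given, and either the test
    failed or '-vv' (verbosity >= 2) was specified."""
    if succinct:  # '-s' suppresses metrics entirely
        return False
    return is_failure or verbosity >= 2
```
<br>
Under the current behavior, by contrast, a passing test's metrics would also be shown at verbosity 0 as long as neither '-s' nor '-q' was given.<br>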
<br>
<a href="http://reviews.llvm.org/D8460" target="_blank">http://reviews.llvm.org/D8460</a><br>
<br>
Files:<br>
utils/lit/lit/main.py<br>
<br>
Index: utils/lit/lit/main.py<br>
===================================================================<br>
--- utils/lit/lit/main.py<br>
+++ utils/lit/lit/main.py<br>
@@ -42,7 +42,7 @@<br>
self.progressBar.update(float(self.completed)/self.numTests,<br>
test.getFullName())<br>
<br>
- shouldShow = test.result.code.isFailure or \<br>
+ shouldShow = test.result.code.isFailure or self.opts.showOutput > 1 or \<br>
(not self.opts.quiet and not self.opts.succinct)<br>
if not shouldShow:<br>
return<br>
@@ -55,21 +55,28 @@<br>
print('%s: %s (%d of %d)' % (test.result.code.name, test_name,<br>
self.completed, self.numTests))<br>
<br>
- # Show the test failure output, if requested.<br>
- if test.result.code.isFailure and self.opts.showOutput:<br>
- print("%s TEST '%s' FAILED %s" % ('*'*20, test.getFullName(),<br>
- '*'*20))<br>
- print(test.result.output)<br>
- print("*" * 20)<br>
-<br>
- # Report test metrics, if present.<br>
- if test.result.metrics:<br>
- print("%s TEST '%s' RESULTS %s" % ('*'*10, test.getFullName(),<br>
- '*'*10))<br>
- items = sorted(test.result.metrics.items())<br>
- for metric_name, value in items:<br>
- print('%s: %s ' % (metric_name, value.format()))<br>
- print("*" * 10)<br>
+ showMetrics = test.result.metrics and not self.opts.succinct<br>
+ showPassingTest = (self.opts.showOutput > 1<br>
+ and (test.result.output or showMetrics))<br>
+<br>
+ # Show the test output, if requested.<br>
+ if ((test.result.code.isFailure and self.opts.showOutput)<br>
+ or showPassingTest):<br>
+ result_str = 'FAILED' if test.result.code.isFailure else 'RESULT'<br>
+ banner = '*'*20 if test.result.code.isFailure else '*'*10<br>
+ print("%s TEST '%s' %s %s" % (banner, test.getFullName(),<br>
+ result_str, banner))<br>
+ if test.result.output:<br>
+ print(test.result.output)<br>
+<br>
+ # Report test metrics, if present.<br>
+ if showMetrics:<br>
+ if test.result.output:<br>
+ print(banner)<br>
+ items = sorted(test.result.metrics.items())<br>
+ for metric_name, value in items:<br>
+ print('%s: %s ' % (metric_name, value.format()))<br>
+ print(banner)<br>
<br>
# Ensure the output is flushed.<br>
sys.stdout.flush()<br>
@@ -162,7 +169,7 @@<br>
action="store_true", default=False)<br>
group.add_option("-v", "--verbose", dest="showOutput",<br>
help="Show all test output",<br>
- action="store_true", default=False)<br>
+ action="count", default=0)<br>
group.add_option("-o", "--output", dest="output_path",<br>
help="Write test results to the provided path",<br>
action="store", type=str, metavar="PATH")<br>
<br>
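For reference, the switch from `action="store_true"` to `action="count"` in the last hunk relies on optparse's counting action, which increments the destination each time the flag appears. A small self-contained sketch (the option names mirror the patch, but this is not lit itself):<br>
<br>
```python
# Sketch of optparse's action="count": repeated -v flags accumulate,
# so `-vv` yields showOutput == 2. Mirrors the patch's option setup,
# but this parser is standalone, not lit's.
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-v", "--verbose", dest="showOutput",
                  help="Show all test output",
                  action="count", default=0)

opts, _ = parser.parse_args(["-vv"])
print(opts.showOutput)  # 2

opts, _ = parser.parse_args([])
print(opts.showOutput)  # 0
```
<br>
Existing invocations that pass a single `-v` keep working: they now get `showOutput == 1`, which is truthy wherever the old boolean was checked.<br>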
</blockquote></div><br></div></div></div>