I would just use addMetric() for this. I see a binary signature as just another kind of metric data; people sometimes even use the term "measure" for the process of computing a hash signature for a binary.

If you really dislike the term "metric", I would rather rename that one method than have two methods that do essentially the same job.

Also, +1 on getting test-suite support for reporting binary hashes!

 - Daniel

On Wed, Feb 3, 2016 at 5:38 PM, Matthias Braun <matze@braunis.de> wrote:

MatzeB created this revision.
MatzeB added reviewers: ddunbar, cmatthews, EricWF.
MatzeB added a subscriber: llvm-commits.
MatzeB set the repository for this revision to rL LLVM.
Herald added a subscriber: mcrosier.

The test-suite will use this to attach additional information, such as the
md5sum of the executable, to the test result. We should not use the existing
addMetric() interface for this, as an md5sum is not really a metric.
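
For illustration, here is a minimal sketch of the intended usage on the
test-suite side; the helper function and the 'hash' key name are
hypothetical and not part of this patch:

    import hashlib

    from lit.Test import PASS, Result

    def make_result_with_hash(executable_path):
        # Compute the md5sum of the built executable.
        with open(executable_path, 'rb') as f:
            digest = hashlib.md5(f.read()).hexdigest()
        result = Result(PASS)
        # Attach the hash as plain result data rather than as a metric;
        # the 'hash' key name is illustrative, not mandated by the patch.
        result.addData('hash', digest)
        return result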

Repository:
  rL LLVM

http://reviews.llvm.org/D16871

Files:
  utils/lit/lit/Test.py
  utils/lit/lit/main.py

Index: utils/lit/lit/main.py
===================================================================
--- utils/lit/lit/main.py
+++ utils/lit/lit/main.py
@@ -94,19 +94,19 @@
     # Encode the tests.
     data['tests'] = tests_data = []
     for test in run.tests:
-        test_data = {
-            'name' : test.getFullName(),
-            'code' : test.result.code.name,
-            'output' : test.result.output,
-            'elapsed' : test.result.elapsed }
+        result = test.result
+        result.data['name'] = test.getFullName()
+        result.data['code'] = result.code.name
+        result.data['output'] = result.output
+        result.data['elapsed'] = result.elapsed
 
         # Add test metrics, if present.
-        if test.result.metrics:
-            test_data['metrics'] = metrics_data = {}
-            for key, value in test.result.metrics.items():
+        if result.metrics:
+            result.data['metrics'] = metrics_data = {}
+            for key, value in result.metrics.items():
                 metrics_data[key] = value.todata()
 
-        tests_data.append(test_data)
+        tests_data.append(result.data)
 
     # Write the output.
     f = open(output_path, 'w')
Index: utils/lit/lit/Test.py
===================================================================
--- utils/lit/lit/Test.py
+++ utils/lit/lit/Test.py
@@ -126,6 +126,8 @@
         self.elapsed = elapsed
         # The metrics reported by this test.
         self.metrics = {}
+        # Additional data reported by this test.
+        self.data = {}
 
     def addMetric(self, name, value):
         """
@@ -144,6 +146,15 @@
             raise TypeError("unexpected metric value: %r" % (value,))
         self.metrics[name] = value
 
+    def addData(self, name, value):
+        """
+        Attach additional data to the test result.
+
+        The data must be encodable as JSON (strings, numbers, lists,
+        dictionaries).
+        """
+        self.data[name] = value
+
 # Test classes.
 
 class TestSuite:
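
To make the effect on the encoded output concrete, here is a sketch of one
entry in tests_data after this change; all values are invented for
illustration, and the 'hash' entry would come from a hypothetical
addData('hash', ...) call:

    example_entry = {
        'name': 'test-suite :: SingleSource/Benchmarks/foo.test',
        'code': 'PASS',
        'output': '',
        'elapsed': 1.23,
        'hash': 'd41d8cd98f00b204e9800998ecf8427e',
    }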