<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hey Alexander,</p>
<p>I did work towards a clang-tidy buildbot some time ago but
unfortunately had to stop because of time constraints.</p>
<p>My setup was (and still is) along these lines:</p>
<p>1. build latest clang-tidy as a docker image<br>
2. make a project image, compile the project with its own build
system, and generate the "compile_commands.json"<br>
3. run each check-category over the whole project<br>
- deduplicating diagnostics, as the output is massive for some
project/check combinations (~GBs for one run)<br>
- applying fixes<br>
- checking if the project still compiles after fixing<br>
- displaying `git diff` to be able to see what has been fixed
and to find potential breaking changes<br>
- optionally silencing some checks, like style-related stuff
that is just too noisy.<br>
</p>
<p>I ran this over ~10 projects (blender, opencv, llvm, curl,
...) privately while developing. In principle this buildbot works,
but I would refine some parts,<br>
especially moving towards the mono-repo. Improving access to the
diagnostics and the diff would help as well, I guess. It's still
rather a prototype.</p>
<p>If there is more interest I can publish my current work to github
or so and we can set up a buildbot. I would definitely contribute
a worker machine,<br>
and the docker-based approach should make it easy to set up more
workers for more projects.<br>
The only obstacle I personally see is the massive output some
projects generate, and how to make something useful out of it.</p>
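<p>One lever against the output volume is the check filter itself:
clang-tidy picks up a ".clang-tidy" file from the project root, so
noisy checks can be silenced per project. A minimal sketch (the
disabled check here is just one example of a particularly chatty
style check):</p>
<pre>
```yaml
# .clang-tidy -- run one check category, minus a purely stylistic check
Checks: 'modernize-*,-modernize-use-trailing-return-type'
WarningsAsErrors: ''
```
</pre>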
<p>IMHO it should not run on the main buildbot we have: for
example, "does it still compile" rarely holds when fixing is
activated (unfortunately), so the<br>
easy ways to measure success are not feasible.</p>
<p>Best Regards, Jonas<br>
</p>
<div class="moz-cite-prefix">On 26.04.19 at 11:06, Alexander
Zaitsev via cfe-dev wrote:<br>
</div>
<blockquote type="cite"
cite="mid:9b404ef6-c7b1-eccc-6611-a476ba3ca5bf@tut.by">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<p>Hi!</p>
<p>For the Clang Static Analyzer (CSA) and for Clang-Tidy there are
a lot of unit tests. That's great, but from my point of view,
running CSA and clang-tidy <b>regularly </b>on some set of
real-life projects and comparing the results between runs is
also important, to catch any regressions.</p>
<p>I know that for CSA there is a set of useful scripts in
'clang/utils/analyzer', but I didn't find anything similar for
clang-tidy.<br>
</p>
<p>My questions are:<br>
</p>
<ol>
<li>Are there any buildbots which test CSA and/or Clang-Tidy on
some set of projects regularly? If there are not, is it
possible to set one up?<br>
</li>
<li>I didn't find anything about regression testing in the official
documentation. I think a note about regression testing should
be added to the documentation: how it works now, how to run it
for your own checks, etc. Currently there is just a small note
about running unit tests - <a
href="https://clang-analyzer.llvm.org/checker_dev_manual.html#testing"
moz-do-not-send="true">https://clang-analyzer.llvm.org/checker_dev_manual.html#testing</a></li>
</ol>
<p>Thank you.<br>
</p>
<pre class="moz-signature" cols="72">--
Best regards,
Alexander Zaitsev</pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
cfe-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:cfe-dev@lists.llvm.org">cfe-dev@lists.llvm.org</a>
<a class="moz-txt-link-freetext" href="https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev">https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev</a>
</pre>
</blockquote>
</body>
</html>