[cfe-dev] Proposal: Integrate static analysis test suites

Devin Coughlin via cfe-dev cfe-dev at lists.llvm.org
Sun Jan 3 13:02:01 PST 2016


Hi Alexander,

This sounds like an exciting project.

> On Jan 2, 2016, at 12:45 PM, Alexander G. Riccio <test35965 at gmail.com> wrote:
> 
> Devin has started writing scripts for running additional analyzer tests as described in this thread:
> 
> A buildbot sounds like the perfect idea! 
> 
> The idea was to check out the tests/projects from the existing repos instead of copying them. Would it be possible to do the same with these tests?
> 
> Eh? What do you mean? Would that stop someone from running them in the clang unit test infrastructure?

Yes. There are separate build bot scripts to detect regressions on real-world projects. These scripts (in clang/utils/analyzer/) run `scan-build` on projects and compare the analysis results to expected reference results, causing an internal build bot to fail when there is a difference. We use these scripts to maintain coverage on real-world code, where analysis time is often far too long for clang's normal hand-crafted, minimized regression tests. As Anna alluded to above, these scripts can also be used to avoid checking benchmark source code into the reference-results repository. Apple uses these scripts internally to detect analyzer regressions, and we will be adding a public-facing build bot on Green Dragon <http://lab.llvm.org:8080/green/> in the relatively near future, with reference results checked into a public llvm repository.
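As a rough sketch of the run-and-compare workflow (the project name, directory names, and exact invocation here are illustrative, not the actual bot configuration; the real scripts live in clang/utils/analyzer/):

```shell
# Hedged sketch of the build-bot workflow (names are illustrative).
# Step 1: analyze the project with scan-build, writing results to a
# directory, e.g.:
#   scan-build -o new-results make -C myproject
# Step 2: compare the new results against checked-in reference results,
# reporting a regression when they differ.
compare_results() {
    if diff -r "$1" "$2" > /dev/null 2>&1; then
        echo "no analyzer regressions"
    else
        echo "analyzer results changed"
        return 1
    fi
}
```

The point of the comparison step is that the bot fails on any difference, so an engineer has to either fix the regression or deliberately update the reference results.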

As to whether these tests should be run as part of clang’s regular regression tests or as a separate build bot, I think there are two key questions:

1) Are the tests licensed under the UIUC license? Any code contributed to clang needs to be under the UIUC license. If these tests are not, we can use the download-and-patch strategy that Anna mentioned. The scripts in clang/utils/analyzer/ would probably be useful here — although you might have to write a harness to build the tests if one does not already exist.
2) How long does it take to run these tests? If it takes minutes, they are probably better suited to running on a separate build bot.

> Is there any way to treat static analyzer warnings as plain old warnings/errors? Dumping them to a plist file from a command line compilation is a bit annoying, and I think is incompatible with the clang unit testing infrastructure?

The tests in clang/test/Analysis use the same lit.py-based infrastructure as the rest of clang and can use the same `// expected-warning {{...}}` annotations, so in general there is no need to dump to a plist. We do dump plists in some cases to test that proper path diagnostics are being generated (see, for example, test/Analysis/null-deref-path-notes.m), but for most tests plists aren't needed.
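For illustration, a minimal analyzer regression test in that style might look like the following (the file itself is hypothetical; the diagnostic text comes from the core null-dereference checker, and -verify matches the annotation against the emitted warning):

```c
// RUN: %clang_cc1 -analyze -analyzer-checker=core -verify %s
// Hypothetical minimal analyzer test: the expected-warning annotation
// below must match the diagnostic the analyzer emits on that line,
// so no plist output is involved at all.

void use(int *p) {
  if (p)
    return;
  // p is null on this path, so the core checkers warn here.
  *p = 1; // expected-warning {{Dereference of null pointer}}
}
```

With -verify, the test passes exactly when the emitted diagnostics and the annotations agree, which is why these tests fit the normal lit-based check-clang run.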

Devin


