[cfe-dev] [analyzer] Adding build bot for static analyzer reference results

David Blaikie via cfe-dev cfe-dev at lists.llvm.org
Mon Sep 28 16:33:35 PDT 2015

On Mon, Sep 28, 2015 at 4:25 PM, Devin Coughlin via cfe-dev <
cfe-dev at lists.llvm.org> wrote:

> Hi all,
> We’re planning to add a public Apple build bot for the static analyzer to
> Green Dragon (http://lab.llvm.org:8080/green/). I’d like to get your
> feedback on our proposed approach.
> The goal of this bot is to catch unexpected analyzer regressions, crashes,
> and coverage loss by periodically running the analyzer on a suite of
> open-source benchmarks. The bot will compare the produced path diagnostics
> to reference results. If these do not match, we will e-mail the committers
> and a small set of interested people. (Let us know if you want to be
> notified on every failure.) We’d like to make it easy for the community to
> respond to analyzer regressions and update the reference results.
> We currently have an Apple-internal static analyzer build bot and have
> found it helpful for catching mistakes that make it past the normal tests.
> The main downside is that the results need to be updated when new checks
> are added or the analyzer output changes.
> We propose taking a “curl + cache” approach to benchmarks. That is, we
> won’t store the benchmarks themselves in a repository. Instead, the bots
> will download them from the projects' websites and cache them locally.

If we're going to be downloading things from external sources, those
sources could be changing, no? Or will we pin to a specific version - if
we're pinning to a specific version, what's the benefit of taking an
external dependency like that (untrusted, may be down when we need it,
etc), compared to copying the files permanently & checking them in to
clang-tests (or clang-tests-external) as I did for GDB?
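For concreteness, the "curl + cache" step with pinning might look roughly like the sketch below: fetch a benchmark tarball once, keep it in a local cache, and verify a pinned checksum so a silently changed upstream file fails loudly rather than producing confusing diffs. This is a minimal illustration, not the proposed bot's actual implementation; the URL, cache layout, and use of SHA-256 are all assumptions.

```python
import hashlib
import os
import urllib.request

# Hypothetical sketch of "curl + cache" with version pinning. The
# benchmark name, URL, and checksum scheme are illustrative only.
def fetch_benchmark(url, cache_dir, expected_sha256):
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):
        # Download only on a cache miss, so the external site is hit once.
        urllib.request.urlretrieve(url, path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        # Upstream changed (or the download was corrupted): fail loudly.
        raise RuntimeError("upstream benchmark changed: " + path)
    return path
```

With a pinned checksum like this, the external dependency only matters on a cold cache; once cached, the bot behaves like a checked-in copy.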

If we're interested in catching regressions in both the external code and
our code (which I'm interested in doing for GDB, but haven't had time) I
can see why it'd make sense to track ToT of both projects, but that's a bit
of a different goal - and we'd probably want someone to triage those before
mailing developers who committed the changes. (because many regressions
will be due to the external project changing, not the LLVM developer's
change causing a regression)

> If we need to change the benchmarks (to get them to compile with newer
> versions of clang, for example) we will represent these changes as patch
> sets which will be applied to the downloaded version. Both these patch sets
> and the reference results will be checked into the llvm.org/zorg
> repository so anyone with commit access will be able to update them. The
> bot will use the CmpRuns.py script (in clang’s utils/analyzer/) to compare
> the produced path diagnostic plists to the reference results.
> We’d very much appreciate feedback on this proposed approach. We’d also
> like to solicit suggestions for benchmarks, which we hope to grow over
> time. We think sqlite, postgresql, openssl, and Adium (for Objective-C
> coverage) are good initial benchmarks — but we’d like to add C++ benchmarks
> as well (perhaps LLVM?).
> Devin Coughlin
> Apple Program Analysis Team
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
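The per-benchmark pipeline described in the proposal (apply the checked-in patch set, analyze, compare plists against reference results with clang's utils/analyzer/CmpRuns.py) could be sketched as command construction, roughly as below. All paths and flags here are illustrative assumptions, not the bot's actual configuration.

```python
# Hypothetical sketch of one bot iteration for a single benchmark.
# Paths, the use of scan-build, and the make invocation are assumptions.
def benchmark_commands(src_dir, patch_file, ref_results, new_results):
    return [
        # Adapt the downloaded sources to the current clang, if needed.
        ["patch", "-d", src_dir, "-p1", "-i", patch_file],
        # Analyze the build, writing path diagnostics as plist files.
        ["scan-build", "-plist", "-o", new_results, "make", "-C", src_dir],
        # Diff the new plists against the checked-in reference results.
        ["CmpRuns.py", ref_results, new_results],
    ]
```

Keeping the patch sets and reference results in zorg, as proposed, means a failing third step can be resolved by any committer updating the reference directory.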