[cfe-dev] ccc-analyzer progression/regression

Anna Zaks ganna at apple.com
Wed Sep 7 22:43:16 PDT 2011


Hi John,

I think that tracking the progress of the analyzer is very important. 

Actually, we are in the process of setting up an internal buildbot which would regularly analyze several open source projects (Python, openssl, postgresql, Adium, ...). The goal is to be able to spot regressions as quickly as possible, as well as to measure progress. There are several progress metrics one could use, for example:
- reduction of analyzer failures/crashes
- number of new bugs reported / number of false positives removed
- increase in speed 
- memory consumption reduction
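
For concreteness, here is a rough Python sketch of what one such analyze-and-measure step could look like. The project paths and the plain make build step are placeholders, and it assumes scan-build (the driver that invokes ccc-analyzer) is on the PATH:

#!/usr/bin/env python
# Rough sketch: run scan-build over a project and record coarse metrics.
# Assumes scan-build is on the PATH and each project builds with plain make.
import os
import subprocess
import time

PROJECTS = {
    # project name -> checkout directory (placeholder paths)
    "openssl": "/srv/analyzer-bot/openssl",
    "postgresql": "/srv/analyzer-bot/postgresql",
}

def analyze(name, srcdir, outdir):
    start = time.time()
    # scan-build writes one HTML file per diagnostic under -o <dir>
    proc = subprocess.run(["scan-build", "-o", outdir, "make", "-j4"],
                          cwd=srcdir, capture_output=True, text=True)
    elapsed = time.time() - start
    # Count emitted reports as a rough "bugs reported" metric.
    reports = sum(f.startswith("report-") and f.endswith(".html")
                  for _, _, files in os.walk(outdir) for f in files)
    return {"project": name, "reports": reports,
            "seconds": round(elapsed, 1),
            "build_failed": proc.returncode != 0}

if __name__ == "__main__":
    for name, srcdir in PROJECTS.items():
        print(analyze(name, srcdir, os.path.join("/tmp/scan-results", name)))

Report count and wall-clock time fall out directly; analyzer crashes could be counted from the failures/ subdirectory that scan-build creates in its output directory, and memory would need something like /usr/bin/time wrapped around the build.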

Going back to your proposal, I think it would be helpful to:
1) Run the analyzer on a bunch of projects and report any failures. (Ideally, these would be projects that are not already covered by our buildbot.)
2) Regarding the analyzer reports, it would be VERY helpful to categorize the (subset of) bugs into real bugs and false positives. An increase in the number of bugs reported could mean either improvement or regression in the analyzer. (A regression might be the analyzer reporting more false positives.) Since you are not a programmer, I am not sure whether you would be interested in doing this.
3) A much more ambitious project would be to set up an external buildbot, which would analyze the open source projects and, possibly, allow others to categorize the reports. It could be something similar to Coverity's open source scan results (http://scan.coverity.com/rung2.html).
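
As a rough illustration of the comparison itself (the rev X vs. rev X+1 idea described in the quoted proposal below), here is a minimal sketch that diffs two scan-build result directories for the same source tree, grouping reports by bug type via the BUGTYPE comment scan-build embeds in each HTML report. This is only an outline, not an existing tool:

# Rough sketch: compare two scan-build result directories by bug type.
# Usage: python compare_results.py <old-results-dir> <new-results-dir>
import os
import re
import sys
from collections import Counter

BUG_TYPE = re.compile(r"<!-- BUGTYPE (.*?) -->")

def count_by_type(results_dir):
    counts = Counter()
    for root, _, files in os.walk(results_dir):
        for name in files:
            if name.startswith("report-") and name.endswith(".html"):
                with open(os.path.join(root, name), errors="replace") as f:
                    m = BUG_TYPE.search(f.read())
                counts[m.group(1) if m else "unknown"] += 1
    return counts

if __name__ == "__main__":
    old, new = count_by_type(sys.argv[1]), count_by_type(sys.argv[2])
    for bug_type in sorted(set(old) | set(new)):
        delta = new[bug_type] - old[bug_type]
        if delta:
            print("%s: %d -> %d (%+d)" % (bug_type, old[bug_type],
                                          new[bug_type], delta))

Counts alone cannot distinguish a new true positive from a new false positive, which is why the manual categorization in point 2 matters.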

Cheers,
Anna.

On Sep 6, 2011, at 12:35 PM, John Smith wrote:

> Hi.
> 
> 
> I really love the concept of static analysis, and would like to
> contribute somehow. However, I am not a programmer myself, so I was
> wondering what I could do to contribute to the ccc-analyzer?
> 
> I have the following idea:
> 
> 1.) Let the clang/ccc-analyzer devs pick a handful of reasonably
> well-known / widely used open source projects,
>    preferably code bases that the ccc-analyzer devs are at least
> somewhat familiar with
> 2.) Let me run the analyzer on these code bases with the latest trunk
> clang/ccc-analyzer
> 3.) Let me post the results on a website
> 4.) A while later (some months?), I could run the latest rev of
> clang/ccc-analyzer on the same versions of the chosen code bases again
> 
> Then perhaps the differences in the results of running analyzer rev x
> on code base y versus running analyzer rev x+1 on the same code base y
> could provide some insight into how well the analyzer is progressing
> on real-world code bases?
> 
> The core idea here is not to find issues in the latest version of any
> particular codebase, but rather to discover progression/regression of
> the static analyzer. Of course, this would probably work best if the
> code bases being analyzed are reasonably well/widely used real-world
> projects, and the clang/ccc-analyzer devs are fairly familiar with
> them, so that they can relatively easily interpret the (differences
> in) results.
> 
> 
> Just my two cents... Could this be useful?
> 
> 
> Let me know what you think.
> 
> 
> 
> Regards,
> 
> 
> John Smith.
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
