[cfe-dev] ccc-analyzer progression/regression

John Smith lbalbalba at gmail.com
Thu Sep 8 06:11:20 PDT 2011


Hi Anna,


Thank you for your kind reply.


1.)
Doing this automatically with a build-bot would of course be far more
convenient and effective than doing things manually. I was unaware
that you are currently setting this up. I could of course still do
something similar manually for projects that aren't covered by the
build-bot, but maybe it would be a better idea to just add those to
the automatic process? ;)
I ran the analyzer on a few projects a while back, but as a non-dev I
didn't immediately know how to turn that data into something useful
(until I came up with the idea of comparing results from different
versions of the analyzer). The open source projects I tried were:
bind, dhcp, gcc, gdb, glib, ntp, openldap, openssl, postfix. Some of
those might be good candidates for the build-bot?
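For what it's worth, driving the analyzer over a list of checked-out
projects is usually done with the scan-build wrapper that ships with
clang. A minimal Python sketch of that loop, assuming scan-build is on
PATH, each project lives in a subdirectory of that name, and each one
builds with plain make (the project names and revision tag below are
just examples):

```python
import subprocess

def scan_command(project_dir, out_dir):
    """Build a scan-build invocation for one project.

    scan-build wraps the project's normal build command, intercepts
    the compiler calls, and writes HTML bug reports into out_dir.
    """
    return ["scan-build", "-o", out_dir, "make", "-C", project_dir]

def analyze_all(projects, rev_label):
    """Run the analyzer over each project, one report dir per project."""
    for proj in projects:
        cmd = scan_command(proj, "%s-reports-%s" % (proj, rev_label))
        subprocess.call(cmd)

# Example (not run here; "r140000" is a placeholder revision tag):
# analyze_all(["openssl", "postfix", "openldap"], "r140000")
```

Tagging each output directory with the analyzer revision is what later
makes it possible to compare runs of different analyzer versions over
the same frozen source tree.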


2.)
It's not that I'm not interested in determining which results are
indeed bugs, which are false positives, and maybe even where the false
negatives are... It's just that I don't have the required skills to do
so. Which in turn made me wonder in what ways I could help out as a
non-programmer, and resulted in the email below. Doing this
automatically would of course be preferable.


3.)
Doing something like Coverity's open source scan results would indeed
be the holy grail. :)
But as a start it might be a lot easier, yet still helpful, if the
reports were simply published on some publicly accessible web server?
There would be no fancy features like people
categorizing/reporting/collaborating on the results, but interested
people would be able to easily compare results from different versions
manually for a particular project. Which might be a good start?



Anyway, thank you again for your time and reply,


Regards,


John Smith.


On Thu, Sep 8, 2011 at 7:43 AM, Anna Zaks <ganna at apple.com> wrote:
> Hi John,
> I think that tracking the progress of the analyzer is very important.
> Actually, we are in the process of setting up an internal buildbot which
> would regularly analyze several open source projects (Python,
> openssl, postgresql, Adium, ...). The goal is to be able to spot regressions as
> quickly as possible as well as measure the progress. There are several
> progress metrics which one could use, for example:
> - reduction of analyzer failures/crashes
> - number of new bugs reported / number of false positives removed
> - increase in speed
> - memory consumption reduction
> Going back to your proposal, I think it would be helpful to:
> 1) Run the analyzer on a bunch of projects and report failures if any.
> (Ideally, these would be the projects which are not being tested in our
> buildbot.)
> 2) In regards to the analyzer reports, it would be VERY helpful to
> categorize the (subset of) bugs into real bugs and false positives. An
> increase in the number of bugs reported might mean both improvement and
> regression in the analyzer. (Regression might be due to the analyzer
> reporting more false positives.) Since you are not a programmer, I am not
> sure if you are interested in doing this.
> 3) A much more ambitious project would be setting up an external buildbot,
> which would test the open source projects and, possibly, allow others to
> categorize the reports. It could be something similar to Coverity's open
> source scan results (http://scan.coverity.com/rung2.html).
> Cheers,
> Anna.
> On Sep 6, 2011, at 12:35 PM, John Smith wrote:
>
> Hi.
>
>
> I really love the concept of static analysis, and would like to
> contribute somehow. However, I am not a programmer myself, so I was
> wondering what I could do to contribute to the ccc-analyzer?
>
> I have the following idea:
>
> 1.) Let clang/ccc-analyzer devs pick a handful of reasonably well
> known / widely used open source projects,
>    preferably code bases that the ccc-analyzer devs are at least
> somewhat familiar with
> 2.) Let me run the analyzer on these code bases with the latest trunk
> clang/ccc-analyzer
> 3.) Let me post the results on a website
> 4.) A while later (some months?) I could run the latest rev of
> clang/ccc-analyzer on the same versions of the chosen code bases again
>
> Then perhaps the differences in the results of running analyzer rev x
> on codebase y versus running analyzer rev x+1 on the same code base y
> could provide some insight into how well the analyzer is progressing
> on real world code bases ?
>
> The core idea here is not to find issues in the latest version of any
> particular codebase, but rather to discover progression/regression of
> the static analyzer. Of course, this would probably work best if the
> code being analyzed comes from reasonably well/widely used real-world
> code bases, and the clang/ccc-analyzer devs are fairly familiar with
> these code bases so that they can relatively easily interpret the
> (differences in) results.
>
>
> Just my 2 cents... Could this be useful?
>
>
> Let me know what you think?
>
>
>
> Regards,
>
>
> John Smith.
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
>
>



