[PATCH] D78853: [Analysis] Fix null pointer dereference warnings [1/n]

Aaron Puchert via cfe-commits cfe-commits at lists.llvm.org
Wed Apr 29 15:14:49 PDT 2020


Am 29.04.20 um 23:11 schrieb Mandeep Singh Grang:
> My previous email details why we are doing
> this: http://lists.llvm.org/pipermail/llvm-dev/2020-April/141167.html
> Basically, we ran the PREfast static analysis tool on LLVM/Clang and
> it reported a lot of warnings. I guess some of them are false
> positives after all.
Thanks for the link. There is nothing wrong with running additional
tools on LLVM.
> As David suggests, maybe I should validate these warnings by also
> running the Clang static analyzer.
You could have a look at http://llvm.org/reports/scan-build/. It's not
terribly up-to-date though, so you might also run it yourself.
> There are a lot of other warnings reported by the tool. Here is the
> full
> list: https://docs.google.com/spreadsheets/d/1h_3tHxsgBampxb7PXoB5lgwiBSpTty9RLe5maIQxnTk/edit?usp=sharing.

In my opinion you're probably not losing a lot by filtering out the
types of issues that the Clang static analyzer can find as well. For
example, ignore the null dereferences, since Clang has essentially the
same check. Of course it could be that the tool finds additional issues;
I can't really say. But you can see in the report that there are quite
a few issues of a similar kind. (This one isn't among them, however.)
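To make this concrete (with a made-up snippet, not code from the
report), this is roughly the pattern that both tools tend to flag: a
possibly-null pointer that is dereferenced without a check.

  #include <string>

  struct Node {
    Node *Parent = nullptr;   // null for the root node
    std::string Name;
  };

  // Hypothetical example: Parent may be null, but it is dereferenced
  // unconditionally, so an analyzer reports a potential null dereference.
  std::string parentName(const Node &N) {
    return N.Parent->Name;
  }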

For analyzing the issues in the list, it would be good to note the git
commit of your analysis run; otherwise it might be hard to follow the
reports.

> If the community is interested in getting those fixed I can upstream
> patches.

Improvements are always welcome, but unfortunately no static analysis
tool reports only actual issues. If one did, we could hope that the
Clang findings would all have been fixed by now.

So I think you need to carefully inspect the reports. It's not a bad
idea to start with a random sample of the issues reported by every
check. Avoid stylistic issues like shadowing: people have different
opinions on that. Then try to find out if there is a real issue, or if
you can think of a way to improve the code. (That's often subjective, of
course. But if you can rewrite code a bit to make it more obvious that a
certain bad thing can't happen, you'll find open ears.)
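
As a purely illustrative sketch of what I mean by rewriting: for the
hypothetical snippet above, either handle the null case explicitly or
assert the invariant at the point of use, so that both readers and
analyzers can see the dereference is safe.

  #include <cassert>
  #include <string>

  struct Node {
    Node *Parent = nullptr;   // null for the root node
    std::string Name;
  };

  // Handle the null case explicitly...
  std::string parentNameOrRoot(const Node &N) {
    if (const Node *P = N.Parent)
      return P->Name;
    return "<root>";
  }

  // ...or state the invariant where the dereference happens.
  std::string parentNameAsserted(const Node &N) {
    assert(N.Parent && "caller must not pass the root node");
    return N.Parent->Name;
  }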

The best thing is of course if you can use the report to construct
failing test cases, but I wouldn't put the bar that high.

Best regards,
Aaron

