[PATCH] D19014: [clang-tidy] Add new checker for suspicious sizeof expressions

Etienne Bergeron via cfe-commits cfe-commits at lists.llvm.org
Wed Apr 13 13:15:43 PDT 2016


etienneb added inline comments.

================
Comment at: clang-tidy/misc/SizeofExpressionCheck.cpp:28
@@ +27,3 @@
+  return Node.getValue().getZExtValue() > N;
+}
+
----------------
alexfh wrote:
> There are no firm rules. It mostly depends on how generic/useful in other tools the matcher could be. This one seems pretty generic and it could be used in a couple of other clang-tidy checks at least (e.g. readability/ContainerSizeEmptyCheck.cpp), so it seems to be a good fit for ASTMatchers.h. This can also be done as a separate step, if you prefer.
I'm not at all against lifting this, either now or in a separate step.

I'll just leave it here for now so the patch keeps working as-is (in case people want to play with it and find invalid cases!).
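For reference, lifted into ASTMatchers.h the matcher would look roughly like this. This is only a sketch: the excerpt above shows just the matcher body, so the name `isGreaterThan` and the `IntegerLiteral` node type are assumptions, and the fragment requires the clang AST-matcher headers to compile.

```cpp
// Sketch only -- assumes the clang AST-matcher infrastructure is in scope.
// The matcher name `isGreaterThan` is hypothetical; the review excerpt
// shows only the body.
AST_MATCHER_P(IntegerLiteral, isGreaterThan, unsigned, N) {
  // Matches integer literals whose zero-extended value exceeds N.
  return Node.getValue().getZExtValue() > N;
}
```

It could then be written as, e.g., `integerLiteral(isGreaterThan(0))` in other checks such as readability/ContainerSizeEmptyCheck.cpp, as suggested above.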

================
Comment at: clang-tidy/misc/SizeofExpressionCheck.cpp:197
@@ +196,3 @@
+      this);
+}
+
----------------
alexfh wrote:
> I see the reasoning, but in this case I'd still prefer if this custom matcher looked at the children instead of the parents, since it's a more effective approach in general (at least because it doesn't use the parent map).
> 
> We should probably add a variant of `hasDescendant` (and maybe `hasAncestor`) with a way to limit which nodes it can look through.
I was thinking it's more effective to look at ancestors than at descendants: aren't there fewer ancestors than descendants?

================
Comment at: clang-tidy/misc/SizeofExpressionCheck.cpp:198
@@ +197,3 @@
+}
+
+void SizeofExpressionCheck::check(const MatchFinder::MatchResult &Result) {
----------------
alexfh wrote:
> I'm probably wrong about "a more effective approach in general", but for `sizeof` case it might be more effective, if we assume that `sizeof` doesn't have large subtrees under it.
> 
> A separate concern is that the matchers looking up the tree are somewhat more difficult to understand (at least to me).
> 
> Ultimately, I'm not sure what's the best approach here. Maybe find a huge file and measure runtime of both implementations? ;)
I'm uploading the top-down approach.
As I suspected, this changes nothing on the large cases I tried.

I put a depth limit of 8 levels on the descent, which won't cause any combinatorial explosion.
Worst case: 2^8 nodes.



http://reviews.llvm.org/D19014