[PATCH] D20933: Preallocate ExplodedNode hash table

Ben Craig via cfe-commits cfe-commits at lists.llvm.org
Tue Jun 7 14:02:17 PDT 2016


bcraig added a comment.

tl;dr: I'm going to recommend we stick with the current algorithm, but I only have partial data to back that up.

For the pile of LLVM projects that I am currently building (llvm, clang, libcxx, libcxxabi), 18.9% of all analyzed functions hit the maximum step count.  For the previously discussed large .C file, 37% of the analyzed functions hit the maximum step count.  These figures come from the statistics "AnalysisConsumer - The # of functions and blocks analyzed (as top level with inlining turned on)" and "CoreEngine - The # of times we reached the max number of steps".

The average number of steps per analyzed function is 34,447.  My best guess at the median, given only translation-unit-level statistics, is around 8,050 steps.

If we were to use the average number of steps to size the exploded graph's hash table, we would preallocate a 256 KB bucket array per analyzed function.  We would also still cause the (already slowest) ~20% of our functions that hit the step limit to rehash (twice!).
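
For concreteness, here is a minimal C++ sketch of that sizing math.  This is illustrative only, not the patch itself: DummyNode and sizeForAverageSteps are stand-ins for the analyzer's ExplodedNode and ExplodedGraph.  llvm::FoldingSet grows once the node count exceeds twice the bucket count, which is where the 256 KB and double-rehash figures come from.

  #include "llvm/ADT/FoldingSet.h"
  #include "llvm/Support/MathExtras.h"

  namespace {
  // Illustrative stand-in for ExplodedNode.
  struct DummyNode : llvm::FoldingSetNode {
    unsigned Value;
    void Profile(llvm::FoldingSetNodeID &ID) const { ID.AddInteger(Value); }
  };
  } // namespace

  void sizeForAverageSteps() {
    // FoldingSet rehashes once NumNodes exceeds 2 * NumBuckets, so holding the
    // average of ~34,447 nodes without growing needs >= 17,224 buckets, which
    // rounds up to 2^15 = 32,768.  At 8 bytes per bucket pointer on a 64-bit
    // host, that is a 256 KB allocation per analyzed function.
    const unsigned ExpectedNodes = 34447;
    const unsigned Log2InitSize = llvm::Log2_64_Ceil(ExpectedNodes / 2); // 15
    llvm::FoldingSet<DummyNode> Nodes(Log2InitSize);

    // A function that hits the maximum step count still overflows this table
    // and pays for two doublings (32K -> 64K -> 128K buckets) on top of the
    // up-front 256 KB.
    (void)Nodes;
  }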

There is some data that I wish I had, but don't: per-function step counts.  With those I could present a histogram of the data, and then we would know what percentage of functions would still rehash for any given initial table size.
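
In case it helps, here is a hypothetical post-processing sketch along those lines.  It assumes a local instrumentation patch has already dumped one step count per analyzed function (one integer per line, steps used as a rough proxy for node count) to a file; none of this exists today.  Bucketing by powers of two lets you read off directly what fraction of functions would fit under a given preallocation size.

  #include <cmath>
  #include <cstdio>
  #include <fstream>
  #include <map>

  int main(int argc, char **argv) {
    if (argc < 2)
      return 1;
    std::ifstream In(argv[1]);
    std::map<unsigned, unsigned> Histogram; // ceil(log2(steps)) -> # functions
    unsigned Total = 0;
    for (unsigned long Steps; In >> Steps; ++Total)
      ++Histogram[Steps ? (unsigned)std::ceil(std::log2((double)Steps)) : 0];

    // Cumulative view: the fraction of functions whose step count fits under
    // each power of two, i.e. the fraction that would not rehash if the table
    // were preallocated for that many nodes.
    unsigned Cumulative = 0;
    for (const auto &Bin : Histogram) {
      Cumulative += Bin.second;
      std::printf("steps <= 2^%-2u : %5.1f%% of functions\n", Bin.first,
                  100.0 * Cumulative / Total);
    }
    return 0;
  }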


http://reviews.llvm.org/D20933




