[cfe-dev] -Wunreachable-code and templates

David Blaikie dblaikie at gmail.com
Thu Feb 16 22:41:16 PST 2012

On Wed, Feb 15, 2012 at 10:34 AM, Ted Kremenek <kremenek at apple.com> wrote:
> On Feb 14, 2012, at 4:35 PM, David Blaikie <dblaikie at gmail.com> wrote:
> So I've come up with some numbers for feedback.
> Experiment:
> * Take the attached templates.cpp (it includes every C++11 header
> clang can compile (I was using libstdc++ for this), plus a bunch of
> boost headers - and a little hand-written sanity check)
> * preprocess it (with -std=c++11)
> * strip out the line directives (so that clang won't suppress all the
> diagnostics because they're in system headers)
> * using "perf stat -r100":
>  * run with -Wno-everything and either with or without
> -Wunreachable-code -c -std=c++11
>  * run the same thing with the template_unreachable.diff applied
> (I did this at runlevel 1 to try to reduce some of the noise/background)
> This is basically the worst case I can think of - lots of templated
> code, almost no instantiations or non-templated code.
> Results
> * you should observe relatively similar results to mine in results.txt
> except:
>  * I pruned the 100 repetitions of diagnostics from the results file,
> leaving only one sample of the diagnostic output from the two
> -Wunreachable-code runs (with & without my patch applied)
>  * I added the check to not build the CFG if we're in a dependent
> context and -Wunreachable-code is /not/ enabled after I'd already run
> my perf numbers, so the discrepancy in my results between the two
> versions without the flag enabled should probably not be there (I can
> rerun that to demonstrate if desired)
> & these were the execution time results I got:
> No patch
>  No warning: 1.915069815 seconds ( +-  0.02% )
>  Warning (10 found): 1.923400323 seconds ( +-  0.02% )
> With patch
>  No warning: 1.937073564 seconds ( +-  0.03% ) (this should probably
> be closer to/the same as the first result - it was just run without
> the shortcut as called out above)
>  Warning (20 found - including my sanity check): 1.980802759 seconds
> ( +-  0.03% )
> So about a 3% slowdown (in the enabled case), according to this
> experiment, which is a pretty extreme case.
> What do you reckon? Good? Bad? Need further data/better experiments?
> Hi David,
> Thanks for doing these measurements.  I think we need to do more
> investigation.
> If you told me that we have the distinct possibility of incurring a 3%
> compile time regression, I'd say that this warning would *never* be on by
> default.  A 3% regression is *huge* for a single warning.  Your
> measurements, however, are completely biased by your test case.  I think
> this test shows that the analysis doesn't become ridiculously slow in the
> worst case (a good property), but it doesn't tell me what the real
> performance regression is going to be on normal code.  For example, what is
> the build time impact on the LLVM codebase,

More numbers. I've done 10 builds of LLVM+Clang of each of the 4
variations & come up with the following results:

1) Without unreachable template analysis
1.1) Without -Wunreachable-code: 1924.114592416 seconds ( +-  0.01% )
1.2) With -Wunreachable-code: 1926.877304953 seconds ( +-  0.01% )
2) With unreachable template analysis
2.1) Without -Wunreachable-code: 1925.351616127 seconds ( +-  0.01% )
2.2) With -Wunreachable-code: 1932.952966490 seconds ( +-  0.00% )

(& for those running the numbers at home - that's around 21.5 hours ;))

That's a 0.46% degradation from 1.1 (old code, -Wno-unreachable-code)
to 2.2 (-Wunreachable-code with templates) and a 0.32% degradation
from -Wunreachable-code without templates to with templates.

Honestly I'm rather surprised perf stat's reported standard deviation
between builds was as low as it was. (I even wrote a silly program that
randomly either sleeps for 1 second or doesn't, just to see if I could
elicit a high variation number from perf stat - and it did report
roughly what I would expect from that. I still wish it'd give me the
raw numbers in a more consumable form, but maybe I'll trust it more
now.)

I can try a Chromium build too, if you like.

Another thing to consider is that we could put this case under a
sub-flag if we wanted to.

As for the warnings: examining all of LLVM+Clang, the total current
-Wunreachable-code warning counts are:
Excluding templates: 1252
Including templates: 1272

Of the 20 new cases found, 4 (in SparseBitVector.h) were false
positives due to sizeof, and the other 16 were fairly mundane
legitimate cases due to return/break directly after llvm_unreachable,
etc. (probably a result of Craig Topper's recent changes from
assert(0) to llvm_unreachable making more code unreachable). I'll do
another pass & check in fixes to the legitimate cases (within & outside
templates) soon-ish.

The results.tgz archive contains various details:
results_trimmed.txt - stripped most of the make/build output & the
duplicate batches of diagnostics, so this includes one set of
diagnostics and the perf-stat results for each of the 4 batches.
results_no_templates.txt/results_templates.txt - the diagnostics from
building (once) with -Wunreachable-code, without and with my change.
results_no_templates_summary.txt/results_templates_summary.txt - the
sorted & uniqued warning lines from the above, showing the individual
diagnostics each case found (without duplicates, whether from header
inclusion or from the multiple builds that were run).

I hope this helps,
- David

> Chrome, or building code that
> uses Boost?  That's the kind of measurement I think we need to see.  There's
> going to be a lot more variance there, but at the end of the day these
> measurements don't tell me the performance regression I should expect to see
> from this warning.
> As for the warnings found, what was the false positive rate?  Were these
> real issues?
> Cheers,
> Ted
-------------- next part --------------
A non-text attachment was scrubbed...
Name: results.tgz
Type: application/x-gzip
Size: 87926 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/cfe-dev/attachments/20120216/c5b7c8c2/attachment.bin>
