[PATCH] D29369: [ubsan] Omit superfluous overflow checks for promoted arithmetic (PR20193)

Will Dietz via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Feb 14 10:20:51 PST 2017


dtzWill added a comment.

In https://reviews.llvm.org/D29369#673064, @vsk wrote:

> In https://reviews.llvm.org/D29369#672166, @dtzWill wrote:
>
> > After some thought, can we discuss why this is a good idea?
>
>
> The goal is to lower ubsan's compile-time + instrumentation overhead at -O0, since this reduces the friction of debugging a ubsan-instrumented project.


Apologies for the delay, and thank you for the explanation.
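
(To make sure we're talking about the same pattern, here is a minimal sketch of the case I understand this patch to target, assuming the usual C integer promotions; the function name is made up.)

    /* Both 'short' operands are promoted to 'int' before the addition
     * (usual C integer promotions), so the 32-bit add can reach at most
     * 32767 + 32767 = 65534, which fits comfortably in 'int'.  A
     * -fsanitize=signed-integer-overflow check on that add is provably
     * redundant.  (The narrowing back to 'short' on return is a separate
     * implicit-conversion question, not a signed-overflow one.) */
    short add_shorts(short a, short b) {
        return a + b;
    }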

> 
> 
>> This increases the cyclomatic complexity of code that already is difficult to reason about, and seems like it's both brittle and out-of-place in CGExprScalar.
> 
> Are there cleanups or ways to reorganize the code that would make this sort of change less complex / brittle? I'm open to taking that on.

None that I see immediately (heh, otherwise I'd be working on them myself...), but the separate trapping/non-trapping code paths are particularly what I meant regarding complexity. And while I suppose the AST and its interface are unlikely to change much (?), I'm concerned about this logic silently eliding checks it shouldn't in the future.

(Who would notice if this happened?)
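
(One partial answer: a regression test roughly like the sketch below, assuming the usual clang lit/FileCheck conventions and with a made-up function name, would catch the intrinsic disappearing for a plain 'int' add, where the check is not redundant and must stay.)

    // RUN: %clang_cc1 -fsanitize=signed-integer-overflow -emit-llvm %s -o - | FileCheck %s
    int add_ints(int a, int b) {
      // CHECK: call { i32, i1 } @llvm.sadd.with.overflow.i32
      return a + b;
    }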

> 
> 
>> It really seems it would be better to let InstCombine or some other analysis/transform deal with proving checks redundant instead of attempting to do so on-the-fly during CodeGen.
> 
> -O1/-O2 do get rid of a lot of checks, but they also degrade the debugging experience, so it's not really a solution for this use case.

Understood, that makes sense.

Would running InstCombine (and only InstCombine):

1. Fail to remove any of the checks elided by this change? (A rough empirical probe for this is sketched after this list.)
2. Have a negative impact on the debugging experience? Here I'm mostly asking for a guess; I don't know of an easy way to quantify this exactly.
3. Have an undesirable impact on compilation time, or any other negative consequence?
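
(For (1), one way to probe it empirically, assuming clang and opt from the same build; the file and function names are made up:)

    /* probe.c -- emit unoptimized IR with the checks in, run only
     * InstCombine over it, and see whether the overflow intrinsic
     * survives:
     *
     *   clang -O0 -Xclang -disable-O0-optnone \
     *     -fsanitize=signed-integer-overflow -S -emit-llvm probe.c -o probe.ll
     *   opt -instcombine -S probe.ll | grep sadd.with.overflow
     *
     * (-disable-O0-optnone keeps clang from marking the function
     * 'optnone', which opt would otherwise respect and skip.)  If grep
     * finds nothing, InstCombine alone already removes the check this
     * patch would stop emitting. */
    short probe(short a, short b) {
        return a + b;
    }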

> 
> 
>> Can you better motivate why this is worth these costs, or explain your use case a bit more?
> 
> I have some numbers from LNT. I did a pre-patch and a post-patch run at -O0 + -fsanitize=signed-integer-overflow,unsigned-integer-overflow. There were 4,672 object files produced in each run. This patch brings the average object size down from 36,472.0 to 36,378.3 bytes (a 0.26% improvement), and the average number of overflow checks per object down from 66.8 to 66.2 (a 0.9% improvement).

Wonderful, thank you for producing and sharing these numbers. Those improvements alone don't convince me, but if you're saying this is important to you and your use case/users, I'm happy to go with that.

> I don't have reliable compile-time numbers, but not emitting IR really seems like a straightforward improvement over emitting/analyzing/removing it.

Hard to say. Separation of concerns is important too, but of course there are trade-offs everywhere :). I'd suspect this doesn't change compile time much either way.

> So, those are the benefits. IMO getting close to 1% better at reducing instrumentation overhead is worth the complexity cost here.

Couldn't say, but that sounds reasonable to me and I don't mean to stand in the way of progress!

Can you answer my questions about InstCombine above? Thanks!


https://reviews.llvm.org/D29369




