[llvm-dev] GEP with a null pointer base
Peter Lawrence via llvm-dev
llvm-dev at lists.llvm.org
Fri Jul 21 22:44:58 PDT 2017
Mehdi,
Hal’s transformation only kicks in in the *presence* of UB, and
it does not matter how that UB got there, whether through function
inlining or otherwise.
The problem with Hal’s argument is that the compiler does not have
a built-in Ouija board with which it can conjure up the spirit of the
author of the source code and find out whether the UB was intentional,
with the expectation that it would be deleted, or is simply a bug.
Function inlining does not magically turn a bug into not-a-bug, nor
does post-inlining simplification magically turn a bug into not-a-bug.
Let me say it again: if the compiler can find this UB (after whatever
optimizations it takes to get there), then a static analyzer must
be able to do the same thing, forcing the programmer to fix it
rather than having the compiler optimize it away.
Or, to put it another way: there is no difference between a compiler
and a static analyzer [*]. So regardless of whether it is the compiler or
the static analyzer that finds any UB, the only rational thing to do with
it is report it as a bug.
Peter Lawrence.
[* in fact that’s one of the primary reasons Apple adopted llvm: to use
it as a base for static analysis]
> On Jul 21, 2017, at 10:03 PM, Mehdi AMINI <joker.eph at gmail.com> wrote:
>
>
>
> 2017-07-21 21:27 GMT-07:00 Peter Lawrence <peterl95124 at sbcglobal.net>:
> Sean,
> Let me re-phrase a couple words to make it perfectly clear
>
>> On Jul 21, 2017, at 6:29 PM, Peter Lawrence <peterl95124 at sbcglobal.net> wrote:
>>
>> Sean,
>>
>> Dan Gohman’s “transform” changes a loop induction variable, but does not change the CFG,
>>
>> Hal’s “transform” deletes blocks out of the CFG, fundamentally altering it.
>>
>> These are two totally different transforms.
>>
>>
>> And even the analysis is different,
>>
>> The first is based on an *assumption* of non-UB (actually there is no analysis to perform)
> the *absence* of UB
>>
>> the second Is based on a *proof* of existence of UB (here typically some non-trivial analysis is required)
> the *presence* of UB
>
>> These have, practically speaking, nothing in common.
>>
>
>
> In particular, the first is an optimization, while the second is a transformation that
> fails to be an optimization, because the opportunity for it to occur in real-world
> code — code expected to pass compilation, static analysis, and dynamic sanitizers
> without warnings — is zero.
>
> Or to put it another way, if llvm manages to find some UB that no analyzer or
> sanitizer does, and then deletes the UB, then the author of that part of llvm
> is in the wrong group, and belongs over in the analyzer and/or sanitizer group.
>
> I don't understand your claim; it does not match my understanding of what we managed to agree on in the past.
>
> The second transformation (dead code elimination to simplify) is based on the assumption that there is no UB.
>
> I.e. after inlining, for example, the extra context of the calling function allows us to deduce the value of some conditional branch in the inlined body, based on the impossibility of one of the paths *in the context of this particular caller*.
>
> This does not mean that the program written by the programmer has any UB inside.
>
> This is exactly the example that Hal gave.
>
> This can't be used to expose any meaningful information to the programmer, because it would be full of false positives. Basically, a program could be clean of any static analyzer error and any UBSAN error, and be totally UB-free, and still exhibit tons and tons of such issues.
>
> --
> Mehdi