[LLVMdev] load widening conflicts with AddressSanitizer
criswell at illinois.edu
Fri Dec 16 15:29:47 PST 2011
On 12/16/11 4:45 PM, Chris Lattner wrote:
> On Dec 16, 2011, at 2:41 PM, John Criswell wrote:
>> On 12/16/11 4:14 PM, Chris Lattner wrote:
>>> On Dec 16, 2011, at 12:39 PM, Kostya Serebryany wrote:
>>>> > Do we consider the above transformation legal?
>>> Yes, the transformation is perfectly legal for the normal compiler.
>> So how do you guarantee that the behavior is predictable regardless
>> of hardware platform if you don't define what the behavior should be?
> I'm not sure what you mean. What isn't defined?
The alloca in question allocates 22 bytes. The 64-bit load in Kostya's
original email accesses two additional bytes past the end of the
alloca (i.e., it accesses array "elements" a[22] and a[23]).
Accessing that memory with a read or write is undefined behavior. The
program could fault, read zeros, read arbitrary bit patterns, etc.
In other words, the compiler is transforming this:
return a[16] + a[21];
into something like this:
unsigned long * p = &(a[16]);
unsigned long v = *p; // This accesses memory locations a[16] through
                      // a[23]; doing so is undefined behavior
(do some bit fiddling to extract a[16] and a[21] from v)
The original code is memory safe and exhibits defined behavior. You can
do whatever crazy, semantics-preserving optimization you want, run it on
any crazy architecture you want, and it'll always exhibit the same behavior.
The optimized code exhibits undefined behavior. On most systems, it
just reads garbage bytes that the generated code then ignores, but
that's really just a side effect of how most OSes and architectures do
things. If you do some crazy transforms or run on some obscure
architecture, the optimized code may break.
>> What if you have a funky architecture that someone is porting LLVM
>> to, or someone is using x86-32 segments in an interesting way?
> We'll burn that bridge when we get to it ;-)
ASAN got burnt; SAFECode probably got burnt, too. If we work around it,
some poor researcher or developer may get burnt by it, too, and spend
some time figuring out why his correct-looking program is not acting
properly. In other words, you're burning someone else's bridge.
Granted, the benefits of an incorrect optimization may outweigh the
loss of using LLVM on novel systems, but are you sure that making the
optimization work properly would be so detrimental?
>> Moreover, I don't really understand the rationale for allowing a
>> transform to introduce undefined behavior into programs that exhibit
>> no undefined behavior.
> There is no undefined behavior here. This is exactly analogous to the
> code you get for bitfield accesses. If you have an uninitialized
> struct and start storing into its fields (to initialize it) you get a
> series of "load + mask + or + store" operations. These are loading
> and touching "undefined" bits in a completely defined way.
I'll agree that they're both undefined behavior, but I don't think they
fall within the same category. The bit-mask initializing issue is a
compromise you made because there either isn't an alternative way to do
it that has defined behavior, or such an alternative is too expensive
and difficult to implement and/or use.
This appears to be a different case. Fixing the optimization looks
simple enough to me (am I missing something?), and I'm not convinced
that fixing it would hurt performance (although since I haven't run an
experiment, that is conjecture).
So, perhaps I should ask this: if someone took the time to fix the
transform so that it checks both the alignment *and* the allocation size
and measured the resulting performance change, how much would
performance need to suffer before the cure was deemed worse than the
disease?
-- John T.