<p dir="ltr"><br>
On Feb 27, 2016 09:06, "Paul E. McKenney" <<a href="mailto:paulmck@linux.vnet.ibm.com">paulmck@linux.vnet.ibm.com</a>> wrote:<br>
><br>
><br>
> But we do already have something very similar with signed integer<br>
> overflow. If the compiler can see a way to generate faster code that<br>
> does not handle the overflow case, then the semantics suddenly change<br>
> from two's-complement arithmetic to something very strange. The standard<br>
> does not specify all the ways that the implementation might deduce that<br>
> faster code can be generated by ignoring the overflow case, it instead<br>
> simply says that signed integer overflow invokes undefined behavior.<br>
><br>
> And if that is a problem, you use unsigned integers instead of signed<br>
> integers.</p>
<p dir="ltr">Actually, in the case of the Linux kernel we just tell the compiler not to be an ass. We use</p>
<p dir="ltr"> -fno-strict-overflow</p>
<p dir="ltr">or something. I forget the exact compiler flag needed for "the standard is a broken piece of shit and made things undefined for very bad reasons".</p>
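<p dir="ltr">For the record, the flag is real and is spelled the way Linus remembers it; the kernel build passes it (together with its aliasing twin, below) to every compilation unit. A sketch of the equivalent command-line invocation (the file name is just a placeholder):</p>

```shell
# Compile with wrap-on-overflow semantics and without type-based
# aliasing assumptions, as the kernel build does:
gcc -O2 -fno-strict-overflow -fno-strict-aliasing -c file.c
```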
<p dir="ltr">See also the idiotic standard C aliasing rules. Same deal.</p>
<p dir="ltr">So no, standards aren't that important. When the standards screw up, the right answer is not to turn the other cheek. </p>
<p dir="ltr">And undefined behavior is pretty much *always* a sign of "the standard is wrong".</p>
<p dir="ltr"> Linus</p>