[llvm-dev] Change in optimisation with UB in mind
ORiordan, Martin via llvm-dev
llvm-dev at lists.llvm.org
Fri Sep 29 04:34:25 PDT 2017
With LLVM v5.0, I found failures in some of the 'gcc.c-torture/execute' tests due to a change in how optimisation handles code with undefined behaviour. The tests that fail are the '20040409-[123].c' tests.
The underlying failure is due to the optimisation of the following:
#include <limits.h>
int test2(int x) { return x + INT_MIN; }
from using an ADD instruction to using an OR instruction. The optimisation is entirely valid and produces the correct result for every value of 'x' for which the addition is defined (any non-negative 'x'), but it changes behaviour on architectures that can detect integer overflow/underflow (signalling or quiet).
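To illustrate (my own sketch, not taken from the tests), the rewrite at the IR level looks roughly like this; because signed overflow is UB, the 'nsw' flag lets the optimiser assume the sign bit of '%x' is clear, at which point the addition and the bitwise-or produce the same bits:

    ; before: signed addition, overflow is undefined ('nsw')
    %r = add nsw i32 %x, -2147483648
    ; after: '%x' may be assumed non-negative, so the addition
    ; merely sets the sign bit
    %r = or i32 %x, -2147483648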
For many systems the execution cost of ADD and OR is the same, so there is no real reason to choose one over the other, and UB is UB either way. However, the change does impact the detection of out-of-range integer arithmetic on systems that support it.
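As a concrete (if contrived) example of the kind of detection I mean, the GCC/Clang '__builtin_add_overflow' builtin maps onto the hardware's overflow-aware ADD on such targets; an OR of the same operands would never report anything:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int r;
        /* -1 + INT_MIN underflows: an overflow-detecting ADD reports
           it, whereas an OR of the same bits is silently "fine". */
        if (__builtin_add_overflow(-1, INT_MIN, &r))
            puts("overflow detected");
        return 0;
    }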
Is there a new trait in the TTI, or a call-back elsewhere, that allows the target to choose whether this optimisation should use an ADD or an OR (possibly in the cost-models)? I would generally prefer ADD over OR when the execution cost is the same.
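I don't see such a hook today; purely as a sketch (the name and placement are hypothetical, not existing API), something like this in TargetTransformInfo is what I had in mind, which the optimiser could consult before canonicalising the ADD into an OR:

    /// Hypothetical TTI trait -- not in the current tree.  Targets
    /// with overflow-detecting adds would override this to return
    /// true and keep the ADD form when its cost is no worse than OR's.
    virtual bool preferAddOverEquivalentOr() const { return false; }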
Thanks,
MartinO