[LLVMdev] Integer divide by zero

Cameron McInally cameron.mcinally at nyu.edu
Fri Apr 5 13:42:54 PDT 2013


On Fri, Apr 5, 2013 at 2:40 PM, Joshua Cranmer 🐧 <Pidgeot18 at gmail.com> wrote:
...

> Per C and C++, integer division by 0 is undefined. That means, if it
> happens, the compiler is free to do whatever it wants. It is perfectly
> legal for LLVM to define r to be, say, 42 in this code; it is not required
> to preserve the fact that the idiv instruction on x86 and x86-64 will trap.


This is quite a conundrum for me. Yes, I agree with your reading of the
C/C++ standards. However, the x86-64 expectations are orthogonal. I find
that other compilers, including GCC, trap by default at high optimization
levels on x86-64 for this test case. Hardly scientific, but every other
compiler on our machines issues a trap.

We take safety seriously in our compiler. Our other components go to great
lengths not to constant fold an integer division by zero. We would also
like LLVM to do the same. Are there others in the community who feel this
way?

I can envision an option which preserves faults during constant folding.
E.g. an integer version of -enable-unsafe-fp-math or gfortran's -ffpe-trap.
More accurately, this hypothetical option would suppress the folding of
unsafe integer expressions altogether. Would an option such as this benefit
the community at large?

To be complete, I've also explored the idea of generating a
__builtin_trap() call for such expressions before the IR level. However, I
have not yet convinced myself that this will generate the same fault as the
actual sdiv/udiv instruction would. Things to do.

-Cameron