[LLVMdev] Integer divide by zero

Joe Groff arcata at gmail.com
Fri Apr 5 15:49:14 PDT 2013


On Fri, Apr 5, 2013 at 1:42 PM, Cameron McInally
<cameron.mcinally at nyu.edu> wrote:

> This is quite a conundrum to me. Yes, I agree with you on the C/C++
> Standards interpretation. However, the x86-64 expectations are orthogonal.
> I find that other compilers, including GCC, will trap by default at high
> optimization levels on x86-64 for this test case. Hardly scientific, but
> every other compiler on our machines issues a trap.
>

The platform is irrelevant; division by zero is undefined, and compilers
are allowed to optimize as if it can't happen. Both gcc and clang will lift
the division out of this loop, for example, causing the hardware trap to
happen in the "wrong" place if bar is passed zero:

void foo(int x, int y);

void bar(int x) {
    for (int i = 0; i < 100; ++i) {
        foo(i, 200/x);
    }
}
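
Roughly speaking (a sketch for illustration, not taken from either compiler's
actual output), the loop-invariant code motion makes bar behave as if it had
been written like this, so the trap fires before the first call to foo:

void foo(int x, int y);

void bar(int x) {
    int t = 200 / x;   /* division hoisted: if x == 0, the trap happens here */
    for (int i = 0; i < 100; ++i) {
        foo(i, t);
    }
}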

If your program relies on the exact trapping semantics of the hardware
'idiv'/'div' instructions, you might want to use an asm volatile statement.
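
For example, a minimal sketch of that approach (a hypothetical helper, not
part of any LLVM API; it assumes x86-64 and GCC/Clang extended asm syntax):

/* trapping_sdiv: forces a real 'idiv', so the #DE trap fires at this exact
 * point even under optimization. "cltd" sign-extends eax into edx:eax; the
 * "=&d" output is early-clobber because edx is written before den is read. */
static inline int trapping_sdiv(int num, int den) {
    int quot, rem;
    __asm__ volatile ("cltd\n\t"
                      "idivl %[den]"
                      : "=a" (quot), "=&d" (rem)
                      : "a" (num), [den] "r" (den)
                      : "cc");
    (void)rem;
    return quot;
}

In the loop above you would then call foo(i, trapping_sdiv(200, x)); since
the asm is volatile, the compiler should not delete it or fold it away on the
assumption that x is nonzero.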

-Joe