[LLVMdev] Integer divide by zero

Cameron McInally cameron.mcinally at nyu.edu
Mon Apr 8 11:07:48 PDT 2013


Hey again,

I have a nagging thought that my past comments may not have enough meat to
them. So...

On Mon, Apr 8, 2013 at 12:01 PM, Cameron McInally
<cameron.mcinally at nyu.edu> wrote:
...

> I was just writing Chandler about a similar implementation. With my
> current understanding of the problem, my constant division will end up as
> either a trap call or a load of an undefined value in the IR, depending on
> what the FE chooses to do. It's not clear to me, at least with my current
> knowledge of LLVM, how to recreate the actual division instruction in the
> target backend from that information (trap or undefined value). Hopefully
> there is a way. ;)
>

I've tried my little test case with an unmodified Clang
and -fsanitize=integer-divide-by-zero. After LLVM runs the Function
Integration/Inlining Pass, I end up with this IR:

*** IR Dump Before Deduce function attributes ***
define i32 @main() #0 {
entry:
  call void @__ubsan_handle_divrem_overflow(i8* bitcast ({ { i8*, i32, i32 }, { i16, i16, [6 x i8] }* }* @2 to i8*), i64 6, i64 0) #3
  %call1 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([9 x i8]* @.str, i64 0, i64 0), i32 undef) #3
  ret i32 0
}
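
For reference, a test case along these lines reproduces the dump (a
sketch, not my exact source; the constant 6/0 matches the i64 6, i64 0
operands in the handler call above):

#include <stdio.h>

int main(void) {
  /* Constant divide by zero: undefined behavior in C, so Clang only
     warns, and -fsanitize=integer-divide-by-zero instruments it. */
  int x = 6 / 0;
  printf("x is %d\n", x);  /* format string guessed to match [9 x i8] */
  return 0;
}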

The problem I'm facing is how to safely convert this
__ubsan_handle_divrem_overflow call back to the actual integer division in
the target backend. Is it possible, and I'm just missing it? Any insight
would be warmly welcomed.
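
To make the question concrete: before optimization, Clang emits both
the zero check and the division, roughly like this (a hand-written
sketch, simplified; the real check also covers INT_MIN / -1, and the
first handler argument is a pointer to a static descriptor like @2
above):

entry:
  %iszero = icmp eq i32 %den, 0
  br i1 %iszero, label %handler, label %cont

handler:
  %num.w = sext i32 %num to i64
  %den.w = sext i32 %den to i64
  call void @__ubsan_handle_divrem_overflow(i8* %descr, i64 %num.w, i64 %den.w)
  br label %cont

cont:
  %div = sdiv i32 %num, %den    ; the division still exists here

Once the optimizer sees that the divisor is the constant 0, the branch
folds away, the handler call becomes unconditional, and the sdiv folds
to undef, which is exactly what the dump above shows. By then the
division instruction is gone from the IR entirely.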

On a related note, this investigation into Clang has proven insightful.
My compiler's usage of LLVM differs from Clang's in many ways, so it
could be that the issues I face are peculiar to my compiler. Without
delving into details, my input LLVM IR is already heavily optimized.

-Cameron

