[LLVMdev] 8-bit DIV IR irregularities

Nowicki, Tyler tyler.nowicki at intel.com
Wed Jun 27 16:02:25 PDT 2012


Hi,

I noticed that when dividing signed 8-bit values the IR uses a 32-bit signed divide; however, when unsigned 8-bit values are divided the IR uses an 8-bit unsigned divide. Why not use an 8-bit signed divide for 8-bit signed values?

Here is the C code and IR:

char idiv8(char a, char b)
{
  char c = a / b;
  return c;
}

define signext i8 @idiv8(i8 signext %a, i8 signext %b) nounwind readnone {
entry:
  %conv = sext i8 %a to i32
  %conv1 = sext i8 %b to i32
  %div = sdiv i32 %conv, %conv1
  %conv2 = trunc i32 %div to i8
  ret i8 %conv2
}
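
If I read the IR right, the sext/sdiv/trunc sequence is just C's integer promotions made explicit: both char operands of '/' are promoted to int before the division, so the front end emits a 32-bit divide and truncates the quotient back to 8 bits. A sketch of the promoted form (the casts are mine, for illustration):

char idiv8_promoted(char a, char b)
{
  /* what 'a / b' means after the integer promotions */
  char c = (char)((int)a / (int)b);
  return c;
}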

unsigned char div8(unsigned char a, unsigned char b)
{
  unsigned char c = a / b;
  return c;
}

define zeroext i8 @div8(i8 zeroext %a, i8 zeroext %b) nounwind readnone {
entry:
  %div3 = udiv i8 %a, %b
  ret i8 %div3
}
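
Presumably the unsigned case starts out the same way (zext to i32, udiv, trunc) and the optimizer narrows it back to an i8 udiv, which is always safe: the quotient of two zero-extended 8-bit values always fits in 8 bits. The only corner case I can think of that would block the same narrowing for sdiv (just a guess on my part) is -128 / -1: computed in i32 the quotient is 128 and the trunc is well defined, but an i8 sdiv would overflow, which the LangRef calls undefined behavior. A small C demo of that corner case:

#include <stdio.h>

int main(void)
{
  signed char a = -128, b = -1;
  /* promoted to int: (-128) / (-1) == 128, then converted back
     to signed char, which on common targets wraps to -128 */
  signed char c = a / b;
  printf("%d\n", c);
  return 0;
}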

I noticed the same behavior at -O3. The command line arguments I'm using for clang are: -O2 -emit-llvm -S.
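
For reference, the full invocation (div8.c is just my local test file):

clang -O2 -emit-llvm -S div8.c -o div8.ll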

Tyler Nowicki
Intel
