[llvm-bugs] [Bug 29041] New: Incorrect conversion from float to char to int
via llvm-bugs
llvm-bugs at lists.llvm.org
Thu Aug 18 14:48:34 PDT 2016
https://llvm.org/bugs/show_bug.cgi?id=29041
Bug ID: 29041
Summary: Incorrect conversion from float to char to int
Product: clang
Version: 3.8
Hardware: Other
OS: Linux
Status: NEW
Severity: release blocker
Priority: P
Component: Formatter
Assignee: unassignedclangbugs at nondot.org
Reporter: zlatinski at gmail.com
CC: djasper at google.com, klimek at google.com,
llvm-bugs at lists.llvm.org
Classification: Unclassified
Created attachment 16987
--> https://llvm.org/bugs/attachment.cgi?id=16987&action=edit
the build script
I work for NVIDIA and we use a lot of code that converts between float and
int/char. I have recently discovered a bug in the LLVM (LLVM 3.8.256229) code
generator for ARM64 that does not correctly convert float to char to int. I
have created example code to demonstrate it, and I have also compiled the same
code with gcc for reference.
Here are two use cases:
unsigned int int_char_int_func(unsigned int inIntVal)
{
    unsigned char charVal = inIntVal;
    return charVal;
}

unsigned int float_char_int_func(float infloatVal)
{
    unsigned char charVal = (unsigned char)infloatVal;
    return charVal;
}
Both functions above must convert the input parameter to unsigned char and then
return it as an unsigned int.
1. For the first function, int_char_int_func(), which does int -> char -> int,
here is the code clang/llvm generates:
0: 12001c00 and w0, w0, #0xff // this is correct
4: d65f03c0 ret
here is the gcc code generated:
0: 53001c00 uxtb w0, w0 // this is correct
4: d65f03c0 ret
2. For the second function, float_char_int_func(), which does float -> char -> int,
here is the code clang/llvm generates:
0: 1e380000 fcvtzs w0, s0 // this is wrong: where is the conversion from int to char?
4: d65f03c0 ret
here is the gcc code generated:
0: 1e390000 fcvtzu w0, s0
4: 53001c00 uxtb w0, w0 // this is correct: converting the int to char before return
8: d65f03c0 ret
As you can see above, LLVM generates wrong code when converting from float to
char and then to int.
I have also compiled the code to LLVM IR, and it looks correct, which makes me
believe the issue is somewhere in the AsmPrinter:
; Function Attrs: norecurse nounwind readnone sspstrong uwtable
define i32 @_Z17int_char_int_funcj(i32 %inIntVal) #0 !dbg !6 {
  tail call void @llvm.dbg.value(metadata i32 %inIntVal, i64 0, metadata !11, metadata !24), !dbg !25
  %1 = and i32 %inIntVal, 255, !dbg !26
  ret i32 %1, !dbg !27
}

; Function Attrs: norecurse nounwind readnone sspstrong uwtable
define i32 @_Z19float_char_int_funcf(float %infloatVal) #0 !dbg !13 {
  tail call void @llvm.dbg.value(metadata float %infloatVal, i64 0, metadata !18, metadata !24), !dbg !28
  %1 = fptoui float %infloatVal to i8, !dbg !29
  tail call void @llvm.dbg.value(metadata i8 %1, i64 0, metadata !19, metadata !24), !dbg !30
  %2 = zext i8 %1 to i32, !dbg !31
  ret i32 %2, !dbg !32
}
I'm attaching the source code as well as the disassembly from the produced LLVM
IR and object files. I'm also attaching the script I used to generate them.
Thanks for your help!
Best Regards,
Tony