[LLVMbugs] [Bug 7931] New: __floatsidf output is wrong with INT_MIN as input
bugzilla-daemon at llvm.org
Tue Aug 17 11:10:45 PDT 2010
http://llvm.org/bugs/show_bug.cgi?id=7931
Summary: __floatsidf output is wrong with INT_MIN as input
Product: compiler-rt
Version: unspecified
Platform: PC
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P
Component: compiler-rt
AssignedTo: unassignedbugs at nondot.org
ReportedBy: quickslyver at free.fr
CC: llvmbugs at cs.uiuc.edu
$cat test.c
#include <assert.h>
#include <stdio.h>
#include <limits.h>

/* compiler-rt's software int -> double conversion */
double __floatsidf(int);

int main()
{
    double b = (double)INT_MIN;          /* native conversion */
    double a = __floatsidf(INT_MIN);     /* compiler-rt conversion */
    printf("a=%f\n", a);
    printf("b=%f\n", b);
    /* raw bits, high word then low word (assumes little-endian;
       %x does not zero-pad, so the all-zero low word prints as a single 0) */
    printf("a=%x%x\n", *((int*)&a + 1), *(int*)&a);
    printf("b=%x%x\n", *((int*)&b + 1), *(int*)&b);
    assert(a == b);
    return 0;
}
---------------------------------
$./a.out
a=-536870912.000000
b=-2147483648.000000
a=c1c000000
b=c1e000000
a.out: test2.c:14: main: Assertion `a==b' failed.
Aborted
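
The printed bit patterns show a = 0xC1C0000000000000 (-2^29 = -536870912.0) where the
correct value is b = 0xC1E0000000000000 (-2^31 = -2147483648.0), i.e. the result is off
by a factor of 4 exactly for INT_MIN. One likely pitfall in this area is taking the
magnitude of the input with a signed negation, which overflows for INT_MIN. For
illustration only (this is not compiler-rt's implementation, and int_to_double is a
hypothetical name), here is a minimal sketch of a conversion that avoids the overflow by
computing the magnitude in unsigned arithmetic:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Sketch: convert int32 to IEEE-754 double without signed overflow.
   Every int32 fits exactly in a double (|value| <= 2^31 < 2^53). */
static double int_to_double(int32_t a)
{
    if (a == 0)
        return 0.0;
    uint64_t sign = (a < 0) ? 1 : 0;
    /* Magnitude in unsigned arithmetic: 0u - (uint32_t)INT32_MIN
       wraps to 0x80000000, which is the correct magnitude 2^31. */
    uint32_t mag = (a < 0) ? 0u - (uint32_t)a : (uint32_t)a;
    int exp = 31 - __builtin_clz(mag);   /* position of the highest set bit */
    /* Shift the highest set bit to bit 52, then drop the implicit leading 1. */
    uint64_t mant = ((uint64_t)mag << (52 - exp)) & 0x000FFFFFFFFFFFFFULL;
    uint64_t bits = (sign << 63) | ((uint64_t)(exp + 1023) << 52) | mant;
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

int main(void)
{
    printf("%f\n", int_to_double(-2147483647 - 1));  /* expect -2147483648.000000 */
    return 0;
}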