[LLVMbugs] [Bug 1798] New: wrong integer overflow calculation on signed/unsigned int when using optimizer
bugzilla-daemon at cs.uiuc.edu
Tue Nov 13 23:19:16 PST 2007
http://llvm.org/bugs/show_bug.cgi?id=1798
Summary: wrong integer overflow calculation on signed/unsigned int when using optimizer
Product: new-bugs
Version: unspecified
Platform: Macintosh
OS/Version: MacOS X
Status: NEW
Severity: normal
Priority: P2
Component: new bugs
AssignedTo: unassignedbugs at nondot.org
ReportedBy: joseph.bebel+llvm at gmail.com
CC: llvmbugs at cs.uiuc.edu
Created an attachment (id=1219)
--> (http://llvm.org/bugs/attachment.cgi?id=1219)
small test program showing the overflow
While experimenting with LLVM as a drop-in replacement for GCC, I noticed some
interesting behavior related to integer overflow. Consider the attached test
program, which sums large numbers in both a signed and an unsigned int.
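The attachment itself is not reproduced in this message. As a rough sketch only
(the loop bounds, addend, and output format below are assumptions, not the
contents of attachment 1219), a program of the same shape might look like:

    #include <iostream>

    int main() {
        // Accumulate enough that the running total passes 2^31; the
        // specific values here are illustrative, not the attachment's.
        int signed_sum = 0;
        unsigned int unsigned_sum = 0;
        for (int i = 0; i < 100000; ++i) {
            signed_sum += 1000003;     // signed overflow: undefined behavior in C++
            unsigned_sum += 1000003u;  // unsigned overflow: wraps modulo 2^32
        }
        std::cout << signed_sum << "\n" << unsigned_sum << "\n";
        return 0;
    }

Note that once the signed accumulator overflows, the C++ standard makes the
behavior undefined, so a conforming optimizer may legitimately change the
signed result; the unsigned accumulator, by contrast, is required to wrap
modulo 2^32, so a changed unsigned result is the more clearly suspicious of
the two.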
When compiled with the latest llvm-g++ (2.1, Mac OS X universal tarball) with
no optimization, and with Apple GCC 4.0.1 at every optimization level, the
program correctly outputs "1206807378" for both the signed and the unsigned
sum.
However, with llvm-g++ at any optimization level (-O1/-O2/-O3), it instead
outputs "-940676270" for the signed sum (the correct answer minus 2^31) and
"3354291026" for the unsigned sum. The behavior is always reproducible.
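For reference, a quick check of the reported numbers shows that both optimized
outputs sit exactly 2^31 away from the correct answer:

    1206807378 - 2147483648 = -940676270    (signed output)
    1206807378 + 2147483648 = 3354291026    (unsigned output)

Viewed as 32-bit patterns, -940676270 and 3354291026 are the same value
(3354291026 = 2^32 - 940676270), so both sums appear to end up with an
identical bit pattern whose bit 31 is flipped relative to the correct result.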