[LLVMbugs] [Bug 8980] New: clang -O3 generates horrible code for std::bitset
bugzilla-daemon at llvm.org
Sat Jan 15 07:55:43 PST 2011
http://llvm.org/bugs/show_bug.cgi?id=8980
Summary: clang -O3 generates horrible code for std::bitset
Product: libraries
Version: trunk
Platform: PC
OS/Version: All
Status: NEW
Severity: enhancement
Priority: P
Component: Scalar Optimizations
AssignedTo: unassignedbugs at nondot.org
ReportedBy: benny.kra at gmail.com
CC: llvmbugs at cs.uiuc.edu
Created an attachment (id=6006)
--> (http://llvm.org/bugs/attachment.cgi?id=6006)
IR output from clang -O3
For this piece of C++:
===
#include <bitset>
#include <cstddef> // for size_t

bool foo(unsigned *a, size_t asize, unsigned *b, size_t bsize) {
  std::bitset<32> bits;
  for (unsigned i = 0; i != asize; ++i)
    bits[a[i]] = true;
  for (unsigned i = 0; i != bsize; ++i)
    if (bits[b[i]])
      return true;
  return false;
}
===
clang -O3 (with libstdc++ 4.2) generates much worse IR for the first loop than
llvm-gcc -O3 does. It somehow splits the computation for each word into three
and/or/and operations with large constants (or seven with 64-bit words).
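For reference, the first loop is semantically just an or-with-shift into a
single 32-bit word, which is roughly what the better llvm-gcc output amounts
to. A hand-written sketch of that equivalent (function name and the
all-indices-in-range assumption are mine, not from the attachment):

```cpp
#include <cstddef>

// Hand-written equivalent of the first loop: one 32-bit word, a single
// shift and or per element. Assumes every a[i] < 32, the same precondition
// the std::bitset<32>::operator[] version has.
unsigned set_bits(const unsigned *a, std::size_t asize) {
  unsigned word = 0;
  for (std::size_t i = 0; i != asize; ++i)
    word |= 1u << a[i];
  return word;
}
```

The bug is that clang's optimizer fails to reduce the proxy-reference code
from std::bitset::operator[] down to this simple form.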