[llvm-commits] Fwd: [cfe-commits] r123720 - in /cfe/trunk/lib/CodeGen: CGDecl.cpp CodeGenModule.cpp
Ken Dyck
kd at kendyck.com
Tue Jan 18 09:32:58 PST 2011
On Tue, Jan 18, 2011 at 12:38 AM, John McCall <rjmccall at apple.com> wrote:
> On Jan 17, 2011, at 6:01 PM, Ken Dyck wrote:
>> Author: kjdyck
>> Date: Mon Jan 17 20:01:14 2011
>> New Revision: 123720
>>
>> URL: http://llvm.org/viewvc/llvm-project?rev=123720&view=rev
>> Log:
>> Replace calls to CharUnits::fromQuantity() with ones to
>> ASTContext::toCharUnitsFromBits() when converting from bit sizes to char units.
>
> Is there any good reason why TargetInfo and ASTContext compute size and
> alignment in bits rather than chars in the first place? I think most consumers of
> this information want it in chars, and for the ones that want bits, well, a multiply
> is a lot cheaper than a divide.
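For reference, the two conversion helpers on ASTContext look roughly like
this (a sketch from memory, not the exact source):

  // ASTContext (sketch): converting between bit sizes and CharUnits.
  CharUnits ASTContext::toCharUnitsFromBits(int64_t BitSize) const {
    return CharUnits::fromQuantity(BitSize / getCharWidth()); // divide
  }

  int64_t ASTContext::toBits(CharUnits CharSize) const {
    return CharSize.getQuantity() * getCharWidth(); // multiply
  }

so storing sizes in char units and multiplying on the (rarer) bit queries
would indeed be the cheaper direction.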
One potential snag is the special case in ASTContext::getTypeInfo()
where the type is an enum or record with an invalid declaration, in
which case it returns a width and alignment of 1 bit.
  //...
  case Type::Record:
  case Type::Enum: {
    const TagType *TT = cast<TagType>(T);
    if (TT->getDecl()->isInvalidDecl()) {
      Width = 1;
      Align = 1;
      break;
    }
    if (const EnumType *ET = dyn_cast<EnumType>(TT))
      return getTypeInfo(ET->getDecl()->getIntegerType());
  //...
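That said, if the bookkeeping moved to char units, this case could
presumably just use one char instead of one bit, something like (an
untested sketch, assuming Width and Align become CharUnits):

  if (TT->getDecl()->isInvalidDecl()) {
    // Assumption: with char-based bookkeeping, the dummy size and
    // alignment for an invalid declaration would be one char.
    Width = CharUnits::One();
    Align = CharUnits::One();
    break;
  }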
I agree with you, though. Calculating sizes and alignments in char units
instead of bits would probably avoid a lot of unnecessary divisions.
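For the record, the pattern this commit replaces looks roughly like the
following (illustrative only, assuming some QualType Ty in scope; not the
literal diff):

  // Before: hand-rolled bits-to-chars conversion in CodeGen.
  CharUnits Size =
      CharUnits::fromQuantity(Context.getTypeSize(Ty) / Context.getCharWidth());

  // After: let ASTContext do the conversion.
  CharUnits Size = Context.toCharUnitsFromBits(Context.getTypeSize(Ty));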
-Ken