[cfe-commits] [Review] rolling out ASTContext::getTypeSizeInBytes()
rjmccall at apple.com
Wed Nov 25 13:08:57 PST 2009
Ken Dyck wrote:
> On Tuesday, November 24, 2009 5:13 PM, John McCall wrote:
>> Ted Kremenek wrote:
>>> I agree with John. If we're rolling out a new API, why not
>>> add a new type that carries the "dimensionality" of the value
>>> (bytes versus bits)?
>>> John: Is that what you meant?
>> Yeah. I am fairly skeptical of this feature surviving
>> correctly otherwise.
> Okay. I'll take a stab at implementing this.
> As I understand it, these are the changes that need to happen:
> 1. Remove the recently added ASTContext::getByteSize() and
> ASTContext::getTypeSizeInBytes().
> 2. Add a new class (named TypeSize, say) that has two methods: inBits()
> and inBytes().
> 3. Change the return type of ASTContext::getTypeSize() from uint64_t to
> TypeSize.
> 4. Update all calls to getTypeSize() to explicitly choose the bit or byte
> representation.
> Is this what you had in mind?
That would actually be quite expensive, since every instance would need
to somehow know how many bits were in a byte.
I was thinking more along the lines of a type that represented a
size/offset in bytes; call it bytes_t for the sake of argument.
Anything that returned or required a size in bytes would take one of
these instead of a raw uint64_t. It would basically wrap a uint64_t,
but wouldn't allow implicit conversion to or from any integer type, and
it would have overloaded operators to do most of the common operations
on the type, e.g.
bytes_t operator+(bytes_t, bytes_t);
bytes_t operator*(bytes_t, uint64_t);
The idea being to make it more difficult to accidentally encode
byte-size assumptions, as well as to make code more implicitly
self-documenting.
In theory we might also want a bits_t, but that's more awkward without
support from LLVM.