[cfe-dev] libc++: max_size() of a std::vector
François Fayard
fayard.francois at icloud.com
Tue Feb 17 23:35:09 PST 2015
Hi Matt,
You seem to favour the use of std::size_t because it does not "limit the range of values". My original post shows that using std::size_t does not even let a std::vector have a size() greater than PTRDIFF_MAX in any of the 3 major standard library implementations:
- Just run the following code, compiled on a 32-bit system (or with -m32), with libc++:

auto v = std::vector<char>();
std::cout << PTRDIFF_MAX << " " << SIZE_MAX << " " << v.max_size() << std::endl;

and you'll find that v.max_size() equals PTRDIFF_MAX, not SIZE_MAX.
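For reference, here is a complete, compilable version of that snippet (the includes and main() are my additions for completeness):

#include <cstdint>   // PTRDIFF_MAX, SIZE_MAX
#include <iostream>
#include <vector>

int main() {
    auto v = std::vector<char>();
    // On a 32-bit target, libc++ prints max_size() == PTRDIFF_MAX,
    // while libstdc++ and the Microsoft implementation print SIZE_MAX.
    std::cout << PTRDIFF_MAX << " " << SIZE_MAX << " "
              << v.max_size() << std::endl;
}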
- libstdc++ and the Windows standard library implementation both return SIZE_MAX, and if you look at my original post you'll understand that this is a bug: size() has undefined behaviour if the size is greater than PTRDIFF_MAX (see the sketch below). This bug has been in their standard library implementations for 20 years!
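The undefined behaviour comes from how size() is computed. A sketch of the usual pattern (illustrative only, not the actual libstdc++ or Microsoft source):

#include <cstddef>

template <class T>
struct vector_sketch {
    T* begin_;
    T* end_;
    std::size_t size() const {
        // Pointer subtraction yields std::ptrdiff_t. If the element count
        // exceeds PTRDIFF_MAX, the subtraction itself is undefined
        // behaviour ([expr.add]), so a size() above PTRDIFF_MAX can never
        // be computed reliably, whatever max_size() claims.
        return static_cast<std::size_t>(end_ - begin_);
    }
};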
François
> On 18 Feb 2015, at 04:09, Matt Calabrese <rivorus at gmail.com> wrote:
>
> 2) You get another bit and therefore you can address more memory.
> - Maybe, for the people who need to work with arrays of chars larger than 2 GB on 32-bit systems...
>
> This genuinely can be a problem, and I think it is reasonable on its own as a rationale for unsigned. Really, though, what is much more upsetting to me personally is the artificial change in the range of values that adopting a signed type would cause, even though that objection is more ideological.