[cfe-dev] libc++: max_size() of a std::vector

Mikael Persson mikael.s.persson at gmail.com
Tue Feb 17 13:55:51 PST 2015


Hi,

I think this problem is even worse than you suspect, François. Even if the
static_cast to size_t has the "expected result" (i.e., the unsigned value
comes out correct even though it went through an overflowing operation on
the ptrdiff_t), which I agree will probably happen on nearly all platforms,
this is not the worst problem.
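
For illustration, a minimal sketch of that arithmetic (assuming 64-bit
size_t/ptrdiff_t and the usual two's-complement wrap-around; no real
pointers involved, just the integer conversions):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // An element count just past PTRDIFF_MAX, as a vector<char> larger
        // than PTRDIFF_MAX would have.
        std::size_t count = static_cast<std::size_t>(PTRDIFF_MAX) + 1;

        // The signed difference "last - first" would come out negative on
        // typical platforms...
        std::ptrdiff_t diff = static_cast<std::ptrdiff_t>(count);

        // ...but casting back to size_t recovers the original count, which
        // is the "expected result" mentioned above.
        std::printf("%d\n", static_cast<std::size_t>(diff) == count); // prints 1
    }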

The worst problem is that max_size() is used internally by the vector to
decide whether an increase in capacity is possible. This means the vector
would accept a resize operation that pushes its size past PTRDIFF_MAX. At
that point, if you use its iterators in any algorithm or code that takes a
difference between them (e.g., std::sort), you get a negative
difference_type value (overflowed but still interpreted as signed), which
leads to complete disaster in all but the most pedantic / defensive code
out there (how many algorithms do you think check whether the (last -
first) difference is negative or overflows? probably very few).
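
Concretely (again only a sketch of the arithmetic, assuming 64-bit types;
actually allocating such a vector is impractical), the distance an
algorithm like std::sort would compute on such iterators comes out
negative:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // Size of a hypothetical vector<char> that was allowed to grow
        // past PTRDIFF_MAX because max_size() permitted it.
        std::size_t size = static_cast<std::size_t>(PTRDIFF_MAX) + 1;

        // What "last - first" evaluates to inside the algorithm: the count
        // no longer fits in difference_type and comes out negative on the
        // usual two's-complement platforms.
        std::ptrdiff_t n = static_cast<std::ptrdiff_t>(size);
        std::printf("difference_type value: %td\n", n); // negative here
    }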

The real issue is that if max_size() is supposed to regulate how large a
vector can be in practice (and that is exactly what the standard requires
this value to represent), then resizing a vector to that size should
produce a "perfectly good" vector, in the sense that all operations on it
(including algorithms on random-access iterators obtained from it) are
well-behaved. If max_size() is allowed to exceed PTRDIFF_MAX, this is
simply not the case, because nearly everything you could do with a vector
larger than PTRDIFF_MAX is undefined behaviour.

Cheers,
Mikael.