Den 26-09-2017 kl. 19:29 skrev Zach Laine via Boost:
On Tue, Sep 26, 2017 at 11:32 AM, Thorsten Ottosen via Boost <boost@lists.boost.org> wrote:
Den 25-09-2017 kl. 23:13 skrev Benedek Thaler via Boost:
On Mon, Sep 25, 2017 at 2:52 AM, Zach Laine via Boost <boost@lists.boost.org> wrote:
There are no standard containers for which capacity() == 16 implies that
when there are three elements, an allocation will occur when I add a fourth. That's substantially odd. It means that as a user I must know about extrinsic state information (how many pushes were to the front or back) in order to predict memory usage and the number of allocations even roughly.
No need for extrinsic state info, there are the front_free_capacity() and back_free_capacity() members.
1. vector's capacity says nothing about whether the next push_back will allocate
That's exactly what capacity says. See my earlier post, plus 26.3.11.3/1:
size_type capacity() const noexcept;
  Returns: The total number of elements that the vector can hold without requiring reallocation.
It is only capacity() - size() > 0 that will tell you if the /next/ push_back needs to allocate. As Benedek said, front_free_capacity/back_free_capacity tell you explicitly how many pushes you can perform before a reallocation, and these are the very functions used in the precondition of unsafe_push_xxx.

That said, I don't think capacity is particularly useful compared to front_free_capacity/back_free_capacity. OTOH, it may seem strange that you can't tell what the actual contiguous buffer size is. At least, if capacity is going away, so should reserve. But then you have a less generic container to use with algorithms that expect those exact functions (say, a generic algorithm that expects a ContiguousBackInsertionSequence and which can be called with vector, devector, etc.). Do we really want to prohibit such well-defined usage?
3. we have different options here. I don't suppose you object to the fact that when a small buffer is specified, then the small buffer size is also the initial capacity?
No.
So the choices we have are where to distribute the capacity between front_free_capacity and back_free_capacity:
A. divide it as evenly as possible
B. maximize front_free_capacity
C. maximize back_free_capacity
Currently, devector does (C) and provides means for the user to choose the capacity at both ends. What would you prefer instead?
I don't have a better alternative. I do find the current choice clunky.
Benedek, please correct me if I'm wrong, but does the implementation not precisely support the following:

A. push_front on empty container:

  devector<int> dev;
  assert( dev.capacity() == 0 );
  dev.push_front( 42 );
  assert( dev.capacity() > 0 &&
          dev.front_free_capacity() == dev.capacity() - 1 &&
          dev.back_free_capacity() == 0 );

B. push_back on empty container:

  devector<int> dev;
  assert( dev.capacity() == 0 );
  dev.push_back( 42 );
  assert( dev.capacity() > 0 &&
          dev.back_free_capacity() == dev.capacity() - 1 &&
          dev.front_free_capacity() == 0 );

?

If so, do you (Zach) still feel this is clunky? Would it not be weirder if the container decided that after a push_back it would allocate space in the front?

Maybe the confusion is in the constructors:

  devector<int> cont( back_cap, reserve_only_tag{} );
  devector<int> cont( front_cap, back_cap, reserve_only_tag{} );

They could be specified as:

  devector<int> cont( reserve_back_cap{n} );
  devector<int> cont( reserve_front_cap{n} );
  devector<int> cont( reserve_front_cap{n}, reserve_back_cap{m} );

?

As for the capacity when there is a small buffer present, it needs to have a default. I don't think it's a good idea (in general) to start moving elements around instead of growing the buffer in the direction where the free capacity is empty. (I think I can justify that based on examples that make the amortized O(1) complexity impossible to uphold if we do so.) However, one may say: as long as you are below the capacity of the small buffer, please move objects around until the small buffer is full. Does that make sense?

kind regards

Thorsten