On 05/10/2017 19:02, Thorsten Ottosen via Boost wrote:
I understand the idea to avoid zero initialization for buffers, but what's the use case for types with non-trivial constructors? The documentation shows a socket example, and that's covered with default_init_t.
Well, I don't know if my compiler is wrong, but vc 2017 says:
class IntClass
{
    int i;

public:
    IntClass() : i(0) {}
    IntClass( int i ) : i( i ) {}
};
is trivially copyable (hence we can copy arrays of it as fast as arrays of ints). But it will be initialized with zero if I use default_init_t, right?
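The two properties can be checked directly with the standard type traits; a minimal sketch (the class is as above, the static_asserts are just illustration):

```cpp
#include <type_traits>

class IntClass
{
    int i;

public:
    IntClass() : i(0) {}
    IntClass( int i ) : i( i ) {}
};

// The constructors are user-provided, but the copy/move operations and
// the destructor are all implicitly defaulted, so the class is still
// trivially copyable: arrays of it can be copied like arrays of int.
static_assert(std::is_trivially_copyable<IntClass>::value,
              "IntClass is trivially copyable");

// It is NOT trivially default-constructible, so default_init_t cannot
// skip construction: each element still runs IntClass(), setting i to 0.
static_assert(!std::is_trivially_default_constructible<IntClass>::value,
              "IntClass has a non-trivial default constructor");
```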
Right.
I do agree that reserve + unchecked_emplace_back() should work well for types that are /not/ trivially copyable (and it is also better in terms of exception safety). The uninitialized_resize idea probably predates emplace, so at that time there was no efficient/easy way to construct an object in place.
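The pattern looks like the sketch below. It uses std::vector::emplace_back for illustration; unchecked_emplace_back is the proposed devector extension that would do the same thing minus the per-call capacity check (the function name make_labels and the std::string payload are just examples):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// reserve + in-place construction for a type that is not trivially
// copyable: one allocation up front, then each element is constructed
// directly in the buffer, never copied or moved afterwards.
std::vector<std::string> make_labels(std::size_t n)
{
    std::vector<std::string> v;
    v.reserve(n);                 // single allocation
    for (std::size_t i = 0; i != n; ++i)
        v.emplace_back(i, 'x');   // constructs std::string(i, 'x') in place
    return v;
}
```

With reserve done first, no emplace_back can throw due to reallocation, which is where the exception-safety benefit for non-trivially-copyable types comes from.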
Ok, understood
If reserve is not an alias for reserve_back, then I think we must remove it.
reserve() means that a container won't reallocate if size() < capacity(). Where memory is reserved is irrelevant for the user.
I was thinking of generic code along the lines of

template< class BackInsertionContiguousContainer >
void foo( BackInsertionContiguousContainer& cont )
{
    cont.reserve( source.size() );
    for( ... )
        cont.push_back( ... );
}
why should that not work with both devector and vector?
It would work, but it would not be as efficient as vector.
This is tricky. I think the best is to treat both ends independently. If you start moving things around, you can break the O(1) amortized guarantee by interleaving push_front/push_back.
What do you mean by "treat both ends independently"?
I mean that buffer extensions at either end should never affect each other. That is, a push_back can never remove free capacity from the front, a push_front can never remove free capacity from the back.
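As a toy model of that invariant (all names here are illustrative, not devector's actual API): each end has its own free-capacity counter, and an insertion at one end may only consume capacity from that end.

```cpp
#include <cassert>
#include <cstddef>

// Toy capacity model for "treat both ends independently": push_back may
// only consume back free capacity, push_front only front free capacity.
// Neither operation ever touches the other end's counter.
struct ends
{
    std::size_t front_free;
    std::size_t back_free;

    bool push_back_fits()  const { return back_free  > 0; }
    bool push_front_fits() const { return front_free > 0; }

    void push_back()  { assert(push_back_fits());  --back_free;  }
    void push_front() { assert(push_front_fits()); --front_free; }
};
```

Under this rule a push_back that finds back_free == 0 must reallocate (or rebalance), even if the front still has free slots.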
Ok. I still have no clear idea which allocation strategy would be the "natural" one. I've found this blog post with a similar discussion and alternatives: http://larshagencpp.github.io/blog/2016/05/22/devector and, linked from the blog post: https://github.com/orlp/devector

Moving elements just one step when one side is full and the other side has capacity has bad quadratic behavior while the current capacity is being exhausted and we continue inserting on the same side. Moving elements to the middle of the buffer seems a simple operation that reduces this to something closer to O(N log N).

In any case, there will always be a bad insertion sequence that hurts our strategy, so optimizing the common case seems the way to go.

Best,

Ion
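The "shift to the middle" operation can be sketched as follows; plain indices and a std::vector stand in for devector's internal buffer, and all names (layout, recenter, ...) are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Describes where the live elements sit inside a fixed-capacity buffer.
struct layout
{
    std::size_t capacity; // total buffer slots
    std::size_t first;    // index of the first element
    std::size_t size;     // number of live elements

    // Position that splits the free space ~evenly between both ends.
    std::size_t recentered_first() const { return (capacity - size) / 2; }
};

// When one end is full but the other still has capacity, move the whole
// element range to the middle in one pass. Amortized over the inserts
// that follow, this avoids the quadratic cost of shifting by one slot
// at a time.
void recenter(std::vector<int>& buf, layout& l)
{
    const std::size_t new_first = l.recentered_first();
    if (new_first < l.first)       // shift left; forward move is safe
        std::move(buf.begin() + l.first,
                  buf.begin() + l.first + l.size,
                  buf.begin() + new_first);
    else if (new_first > l.first)  // shift right; overlapping, move backward
        std::move_backward(buf.begin() + l.first,
                           buf.begin() + l.first + l.size,
                           buf.begin() + new_first + l.size);
    l.first = new_first;
}
```

For example, three elements packed against the back of an 8-slot buffer end up at indices 2..4 after recentering, leaving free slots at both ends.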