On Mon, Sep 25, 2017 at 2:52 AM, Zach Laine via Boost wrote:
> I vote to reject Boost.DoubleEnded.

Thanks for your review, I really appreciate your time.

> My first concern is that this library is centered around runtime
> performance, yet there appears to be no measurement done to determine
> what -- or even whether -- optimization occurs when using this library
> instead of the standard containers.

I'm sure you've seen the `benchmark` documentation page, and the linked ML
thread:
https://lists.boost.org/Archives/boost/2016/02/227743.php

I wasn't able to create a convincing micro-benchmark for this container: it
is really hard to do for the general case (especially when comparing against
vector, which offers only a subset of devector's features). A macro-benchmark,
on the other hand, would require a specific use case, which makes it
subjective again. I also don't think there is much point in comparing the
performance of a small vector against a standard vector, provided the
developer got the `small` size right.

> There are no standard containers for which capacity() == 16 implies that
> when there are three elements, an allocation will occur when I add a
> fourth. That's substantially odd. It means that as a user I must know
> about extrinsic state information (how many pushes were to the front or
> back) in order to predict memory usage and the number of allocations even
> roughly.

There is no need for extrinsic state information: there are the
front_free_capacity() and back_free_capacity() members.
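For example (untested sketch; I'm assuming the boost/double_ended/devector.hpp
header and the de namespace alias used in the docs, and the last assert spells
out the capacity invariant I have in mind):

  #include <boost/double_ended/devector.hpp>
  #include <cassert>

  namespace de = boost::double_ended;

  int main()
  {
    de::devector<int> d;
    d.push_back(1);
    d.push_back(2);
    d.push_back(3);

    // No bookkeeping of past push_fronts/push_backs is needed: the container
    // itself reports how many elements still fit on each end.
    const bool next_push_back_allocates  = (d.back_free_capacity()  == 0);
    const bool next_push_front_allocates = (d.front_free_capacity() == 0);
    (void)next_push_back_allocates;
    (void)next_push_front_allocates;

    // Total capacity splits into the elements plus the free room on each end.
    assert(d.capacity() == d.size()
                           + d.front_free_capacity()
                           + d.back_free_capacity());
  }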
> de::devector<int> reserved_devector(32, 16, de::reserve_only_tag{});
>
> [snip]
>
> That's not a compelling use case to me. Why can't I just do this:
>
> std::vector<int> reserved_vector;
> vector.reserve(48);

The reserve_only_tag is a convenient shorthand I tend to miss.
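To spell out what the shorthand buys (untested sketch; I read the constructor
arguments as front capacity first, then back capacity, as in your snippet):

  #include <boost/double_ended/devector.hpp>
  #include <vector>

  namespace de = boost::double_ended;

  int main()
  {
    // Empty devector with 32 slots reserved in front of any future elements
    // and 16 behind them: the first 32 push_fronts and the first 16
    // push_backs are guaranteed not to allocate.
    de::devector<int> reserved_devector(32, 16, de::reserve_only_tag{});

    // The nearest std::vector equivalent has the same total capacity, but
    // all of it is "back" capacity, and there is no push_front at all.
    std::vector<int> reserved_vector;
    reserved_vector.reserve(48);

    (void)reserved_devector;
    (void)reserved_vector;
  }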
> For that matter, is the performance of devector better than std::deque
> when I don't know the relative frequency of front- and back-pushes? It
> seems in many cases that it would be quite a bit worse.

I do agree. If you have no clue, use a deque -- except when you are not
supposed to use anything more complicated (for example, the internal map of
segments of batch_deque is a devector), or when contiguous storage is
required.
> A custom growth policy must have an integral growth factor, but a
> commonly used growth factor is 3/2.

The growth policy takes the old capacity and returns the new one.
new = old + old/2 (an integral result of a 3/2 factor) is a possible
implementation; I don't think a floating point capacity makes sense.

> I don't understand why should_shrink() is part of GrowthPolicy. If I'm
> growing the container, why am I worried about shrinking it?

Because by declining to shrink, next time you can avoid growing it.
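Roughly like this (untested sketch; the member names just follow the
description above, not necessarily the exact GrowthPolicy concept spelled
out in the reference):

  #include <cstddef>

  // A 3/2 growth policy along these lines.
  struct one_and_a_half_growth
  {
    static std::size_t new_capacity(std::size_t old_capacity)
    {
      // Integer arithmetic only; no floating point growth factor is needed.
      return (old_capacity == 0) ? 16 : old_capacity + old_capacity / 2;
    }

    static bool should_shrink(std::size_t size, std::size_t capacity)
    {
      // Shrink only when utilization is very low; otherwise keep the spare
      // capacity around so the next growth can be avoided.
      return capacity > 64 && size * 4 < capacity;
    }
  };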
> Again, from the example docs:
>
> // int sockfd = socket(...); connect(...);
> de::devector<char> buffer(256, de::unsafe_uninitialized_tag{});
> long recvSize = recv(sockfd, buffer.data(), buffer.size(), 0);
>
> if (recvSize >= 0)
> {
>   buffer.unsafe_uninitialized_resize_back(recvSize);
>   // process contents of buffer
> }
> else { /* handle error */ }
>
> The unsafe resize does nothing in this case, since char has a trivial
> dtor. For nontrivially destructed types, we get RAII violations. I don't
> know how to use a type that does that.

One must explicitly call unsafe_uninitialized_resize_back once the exact
size is known, exactly as the example does: it shrinks the size to the
initialized prefix, so the elements that were never written are no longer
part of the container, and the UB is avoided.

> - What is your evaluation of the implementation?
>
> I did not look.
>
> - What is your evaluation of the documentation?
>
> There are a lot of aspects of the docs I found to be unclear. The
> Benchmarks section contains no benchmarks. There is this:
> "
> Reference stability is an important feature of the batch_deque. Inserting
> elements to the front or to the back of the sequence never invalidates any
> references. Moreover, a special insert operation is provided, stable_insert,
> which takes an iterator, as a hint, and inserts a sequence of elements (not
> necessary directly) before the element pointed by the hint, in a way that
> it doesn't invalidate references.
> "
>
> I find that very difficult to understand. It seems to be saying that
> stable_insert() does not always insert where you wanted in the sequence,
> because the insertion point is just a hint. I think what is intended is
> that the insertion will not necessarily be contiguous (or at least I hope
> that's what is intended).

Perhaps the reference helps?
http://erenon.hu/double_ended/boost/double_ended/batch_deque.html#idp4037601...
Thanks,
Benedek