Den 05-10-2017 kl. 10:20 skrev Thorsten Ottosen via Boost:
Den 05-10-2017 kl. 04:19 skrev degski via Boost:
On 4 October 2017 at 23:48, Benedek Thaler via Boost
wrote: As expected, both devector and batch_deque greatly outperform std::vector and queue in deserialization.
This sounds (to me) counterintuitive. Why is this the case?
Because they can avoid double initialization for some types. When using std::vector, you have to either
A. reserve + load each item into a local object + push_back
B. resize + delegate to the bulk operation in Boost.Serialization
The bulk operation is preferable, but it then requires redundant initialization in resize.
I gave this a little thought, and must admit that the numbers cannot be
explained by double initialization alone.
Looking at the serialization code for DoubleEnded,
I think it may not be quite in line with the Boost.Serialization approach.
For example, it seems to me that it does not work correctly with,
e.g., xml archives.
Basically, the implementation for trivial types avoids all the
Boost.Serialization machinery by calling load_binary/save_binary directly.
Now, what I don't understand is how that machinery can be so costly for
the binary archive case to get the huge difference we see with
std::vector. This is a real mystery. (In the same way, the save test got
much slower when the type became an array.)