On 16/05/2017 at 11:11, Thorsten Ottosen via Boost wrote:
On 15-05-2017 at 21:21, Joaquin M López Muñoz via Boost wrote:
I think the similarity in performance between shuffled ptr_vector and shuffled base_collection goes the other way around: once sequentiality is destroyed, it doesn't matter much whether elements lie relatively close to each other in main memory.
Ok. I guess (and there is no hurry) it would also be interesting to see the results for 0-1000 elements.
It's on my todo list.
I know it's nice to have a graph that grows as a function of n, but I think the best thing would be to make each data point be based on the same number of iterations. So I would use exponential growth for n, n = 8, 16, 32 ... max_n, and then run each loop x times, x being max_n / n.
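For illustration, a minimal sketch of how I read that suggestion (run_test and max_n are placeholder names here, not anything from the actual test suite):

  #include <cstddef>
  #include <iostream>

  // Placeholder benchmark body: stands in for one pass over n elements.
  void run_test(std::size_t n)
  {
    std::cout<<"running with n = "<<n<<"\n";
  }

  int main()
  {
    const std::size_t max_n=1024;
    for(std::size_t n=8;n<=max_n;n*=2){  // n = 8, 16, 32, ..., max_n
      const std::size_t x=max_n/n;       // repetitions for this data point
      for(std::size_t i=0;i<x;++i)run_test(n);
    }
  }

With this scheme every data point is based on the same total amount of work (max_n element visits), at the cost of non-uniform units on the y axis.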
Not sure why this is better than having homogeneous units all across the plot (namely nanoseconds/element). In any case, the testing utilities sort of do the loop repetition the way you suggest, at least for small values of n:
  measure_start=high_resolution_clock::now();
  do{
    res=f();                                    // run the measured operation
    ++runs;
    t2=high_resolution_clock::now();
  }while(t2-measure_start<min_time_per_trial);  // repeat until a minimum time has elapsed
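For reference, a rough sketch of the continuation (assuming the elapsed time is divided by the number of runs and then by the number of elements n; the identifiers below are illustrative, not necessarily those of the real utilities):

  // illustrative only: average nanoseconds per call of f(), then per element
  double t_per_run=
    duration_cast<duration<double,std::nano>>(t2-measure_start).count()/runs;
  double t_per_element=t_per_run/n;             // the nanoseconds/element figure plotted

This is what yields the homogeneous nanoseconds/element unit across the plot regardless of how many repetitions each data point needed.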