On 15-05-2017 at 21:21, Joaquin M López Muñoz via Boost wrote:
On 15/05/2017 at 19:46, Thorsten Ottosen via Boost wrote:
I think this variation of a) might perform better (code untested):
template<typename Model,typename Allocator>
bool operator==(const poly_collection<Model,Allocator>& x,
                const poly_collection<Model,Allocator>& y)
{
  typename poly_collection<Model,Allocator>::size_type s=0;
  const auto &mapx=x.map,&mapy=y.map;
  for(const auto& p:mapx){
    s+=p.second.size();
    auto it=mapy.find(p.first);
    // a segment missing from y must be empty in x; a shared segment must compare equal
    if(it==mapy.end()?!p.second.empty():p.second!=it->second)return false;
  }
  if(s!=y.size())return false; // total element counts must match, not segment counts
  return true;
}
Yeah, could be.
Anyway, it is a bit surprising. Perhaps modern allocators are good at allocating same-size objects close together and without much overhead ...
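[One way to probe this hypothesis is to allocate many same-size nodes back to back and measure how far apart consecutive allocations land. A minimal sketch, assuming nothing beyond the standard library; the 64-byte threshold is an arbitrary stand-in for cache-line distance, and the results are entirely allocator/OS dependent:

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    struct node { unsigned char payload[32]; }; // fixed-size object

    // allocate many same-size nodes individually, as a node-based
    // container would
    std::vector<node*> ptrs;
    ptrs.reserve(100000);
    for (int i = 0; i < 100000; ++i) ptrs.push_back(new node{});

    // count consecutive pairs whose addresses are within 64 bytes
    std::size_t near = 0;
    for (std::size_t i = 1; i < ptrs.size(); ++i) {
        auto a = reinterpret_cast<std::uintptr_t>(ptrs[i - 1]);
        auto b = reinterpret_cast<std::uintptr_t>(ptrs[i]);
        std::uintptr_t gap = b > a ? b - a : a - b;
        if (gap <= 64) ++near;
    }
    std::cout << near << " of " << ptrs.size() - 1
              << " consecutive allocations were <= 64 bytes apart\n";

    for (node* p : ptrs) delete p;
}

A high ratio would support the idea that a modern allocator packs same-size requests nearly contiguously.]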
I think the similarity in performance between shuffled ptr_vector and shuffled base_collection points the other way: once sequentiality is destroyed, it doesn't matter much whether elements lie relatively close to each other in main memory.
Ok. I guess (and there is no hurry) it will also be interesting to see results for 0-1000 elements.

I know it's nice to have a graph that grows as a function of n, but I think the best approach would be to base each data point on the same total number of iterations. So I would use exponential growth for n, n = 8, 16, 32, ..., max_n, and then run each loop x times, x being max_n / n.

kind regards

Thorsten
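[A minimal sketch of the measurement scheme Thorsten describes; run_once is a hypothetical callable standing in for the container traversal under test, and the benchmark name and output format are placeholders:

#include <chrono>
#include <cstddef>
#include <iostream>

// Each data point does the same total amount of work: n grows exponentially
// and the measured loop is repeated x = max_n / n times, so small-n points
// are averaged over many runs instead of a single noisy one.
template<typename F>
void benchmark(std::size_t max_n, F run_once)
{
    using clock = std::chrono::steady_clock;
    for (std::size_t n = 8; n <= max_n; n *= 2) {
        const std::size_t x = max_n / n;  // repetitions for this data point
        const auto t0 = clock::now();
        for (std::size_t i = 0; i < x; ++i) run_once(n);
        const auto t1 = clock::now();
        const double secs = std::chrono::duration<double>(t1 - t0).count();
        std::cout << "n=" << n << " reps=" << x
                  << " time/rep=" << secs / x << "s\n";
    }
}

With max_n = 1 << 20, the n = 8 point is averaged over 131072 repetitions while the n = max_n point runs once, so every point reflects the same number of element visits.]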