On 2015-03-13 13:38, Phil Endecott wrote:
> Hi Giacomo,
>
> No, "how it works" is the first thing I want to know! It's unlikely
> that you've invented something completely new...

Granted... I don't think I have invented anything new, but if I had started by describing the data structure (and I am not very good at it) I would have got no answers. Raw milliseconds usually do the trick, especially with C++ enthusiasts. I am writing a PDF that explains how this thing works, but basically it is a heap with a few tweaks.
>> Let's get straight to the benchmarks
>
> Rather than quantified benchmarks, to start with I'd like to know the
> big-O complexity of the operations.

The big-O complexity of the operations is the same as flat_map's or worse: lookup is O((log n)^2), insertion/erasure is O(n), and a full in-order traversal is O(n (log n)^2), since unfortunately you cannot conveniently move to the next element in the storage vector.
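To illustrate the traversal point with a familiar case (this is a plain binary heap, not my structure): a heap laid out in a vector has no cheap "next element in sorted order", so reading it in order means paying a per-element extraction cost. Here each step is O(log n); in my structure the successor step costs O((log n)^2), hence O(n (log n)^2) for a full pass.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Reading an implicit min-heap in ascending order: each element requires
// an O(log n) pop_heap, so the whole pass is O(n log n) -- there is no
// way to just step to the next element in the storage vector.
std::vector<int> drain_in_order(std::vector<int> heap) {
    std::make_heap(heap.begin(), heap.end(), std::greater<int>()); // min-heap
    std::vector<int> out;
    out.reserve(heap.size());
    while (!heap.empty()) {
        // Move the current minimum to the back, O(log n).
        std::pop_heap(heap.begin(), heap.end(), std::greater<int>());
        out.push_back(heap.back());
        heap.pop_back();
    }
    return out; // ascending order
}
```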
> (Personally, I have a flat_map that does batched insertions, i.e. the
> new items are appended individually and then merged at the end of the
> batch. This will have significant benefits for some workloads.)
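For readers following along, the batching scheme Phil describes could be sketched roughly as below. This is my own illustrative sketch, not Phil's code; the class and member names are invented.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of a flat map with batched insertion: new items are appended
// individually (O(1) amortized each) and the tail is merged into the
// sorted prefix once per batch with a single O(n) pass.
class batched_flat_map {
public:
    using value_type = std::pair<int, int>;

    // Append only; lookups ignore pending items until flush() is called.
    void insert(int key, int value) {
        data_.emplace_back(key, value);
        ++pending_;
    }

    // Sort the appended tail, then merge it into the sorted prefix.
    // One O(k log k + n) step pays for the whole batch of k insertions.
    void flush() {
        auto mid = data_.end() - pending_;
        std::sort(mid, data_.end(), by_key);
        std::inplace_merge(data_.begin(), mid, data_.end(), by_key);
        pending_ = 0;
    }

    // Ordinary flat-map lookup: O(log n) binary search over the sorted part.
    const int* find(int key) const {
        auto sorted_end = data_.end() - pending_;
        auto it = std::lower_bound(data_.begin(), sorted_end,
                                   value_type{key, 0}, by_key);
        if (it != sorted_end && it->first == key) return &it->second;
        return nullptr;
    }

private:
    static bool by_key(const value_type& a, const value_type& b) {
        return a.first < b.first;
    }
    std::vector<value_type> data_;
    std::size_t pending_ = 0;
};
```

The win over a plain flat_map is that a batch of k insertions costs one merge instead of k element-shifting insertions; as Phil notes, that helps some workloads but not interleaved insert/lookup patterns.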
Indeed, that's the first thing I would do, but I wanted better insertion performance even when adding one element at a time, and when interleaving insertions with lookups, because that is what my actual usage scenario does.

Thank you for your time!

Giacomo