On 22/09/2015 18:41, Phil Endecott wrote:
Have you considered making this general-purpose by adding a feature to vector that exports an allocator, or a memory_resource, that could be used by any code that needs a temporary buffer while it can guarantee not to cause the vector to expand? You could then modify inplace_merge, stable_sort etc. to take such an allocator to use instead of get_temporary_buffer. And you could choose whether or not to use a fallback, i.e. get_temporary_buffer, if the spare space in the vector is insufficient.
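In rough terms, using the polymorphic memory_resource interface (std::pmr in C++17; experimental at the time of this thread), I am imagining something like the sketch below. The name and details are invented, and strictly speaking only the vector itself could provide this legally, since outside code is not allowed to touch the storage past the constructed elements:

#include <cstddef>
#include <memory>
#include <memory_resource>
#include <new>
#include <vector>

// Bump allocator over the bytes between v.data() + v.size() and
// v.data() + v.capacity(), i.e. the vector's spare capacity.
class spare_capacity_resource : public std::pmr::memory_resource {
   std::byte* cur_;   // next free byte of the spare capacity
   std::byte* end_;   // one past the last usable byte

public:
   template <class T>
   explicit spare_capacity_resource(std::vector<T>& v)
      : cur_(reinterpret_cast<std::byte*>(v.data() + v.size()))
      , end_(reinterpret_cast<std::byte*>(v.data() + v.capacity()))
   {}

private:
   void* do_allocate(std::size_t bytes, std::size_t align) override
   {
      void* p = cur_;
      std::size_t space = static_cast<std::size_t>(end_ - cur_);
      if (!std::align(align, bytes, p, space))
         throw std::bad_alloc();   // the optional get_temporary_buffer
                                   // fallback would go here instead
      cur_ = static_cast<std::byte*>(p) + bytes;
      return p;
   }

   void do_deallocate(void*, std::size_t, std::size_t) override
   {
      // Nothing to do: the vector reclaims the space when it next grows.
   }

   bool do_is_equal(const std::pmr::memory_resource& other) const noexcept override
   {
      return this == &other;
   }
};

The caller would have to guarantee that the vector neither expands nor changes size() while such a resource is in use.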
Interesting. Could you please elaborate on the basic ideas of this approach?

* * *

I think there are some "guarantees" or "expected behaviours" when inserting into a flat container:

-> The standard insertion function, IMHO, should guarantee that if capacity() - size() is bigger than the range to be inserted, the vector won't be reallocated (after all, capacity() should mean something ;-)). See the std::vector example below.

-> About using temporary storage (other than the stack) in a method of a container: I haven't seen it in any STL implementation. Even std::list::sort, which could perhaps be optimized using temporary storage offering random-access iterators, is always implemented by splitting into temporary lists and merging them. Other flat container implementations, such as Folly's or Loki's, don't use any temporary storage.
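To make the first point concrete with plain std::vector (this is the behaviour I would expect a flat container's range insertion to honour as well):

#include <cassert>
#include <iterator>
#include <vector>

int main()
{
   std::vector<int> v{1, 3, 5};
   v.reserve(10);                 // capacity() - size() >= 7
   const int* before = v.data();

   const int more[] = {7, 9};     // range smaller than the spare capacity
   v.insert(v.end(), std::begin(more), std::end(more));

   assert(v.data() == before);    // guaranteed: no reallocation happened
}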
(I'd want to see numbers convincing me of the practical usefulness of this, though; I've worked on plenty of memory-constrained projects and I've never been desperate enough to resort to this sort of thing!)
It's a bit strange, I know. According to the standard, a container should allocate all memory from its allocator, but I don't know whether that includes temporary storage; I can't find anything explicit in the standard. The issue is that no STL container does anything similar, so it's hard to know what a user should expect. In any case, allocating temporary storage from the allocator or new/malloc on every range insertion sounds like a very good way to fragment memory.
For already sorted inputs and multikey containers, when size()+std::distance(begin, end) > capacity(),
I think you mean < capacity(), right?
No, sorry, I don't think I explained it very well. When the current remaining capacity (capacity() - size()) is less than the size of the range to be inserted (that is, the vector has to be reallocated because the new elements don't fit in the capacity), and the input is sorted (the ordered_range_t overloads), then a new buffer is allocated and std::merge() can be used to achieve an O(N) range insertion.
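Something along these lines (just a sketch over std::vector; the real implementation works directly on the container's internal buffer, and the names are illustrative, not Boost.Container's):

#include <algorithm>
#include <cstddef>
#include <functional>
#include <iterator>
#include <vector>

// O(N) insertion of an already sorted range [first, last) into the sorted
// vector v, used when the range does not fit in the remaining capacity.
template <class T, class It, class Cmp = std::less<T>>
void insert_ordered_range(std::vector<T>& v, It first, It last, Cmp cmp = Cmp())
{
   std::vector<T> tmp;
   tmp.reserve(v.size() + static_cast<std::size_t>(std::distance(first, last)));

   // std::merge prefers the first range on ties, so existing elements stay
   // in front of equivalent new ones: a single stable O(N) pass.
   std::merge(std::make_move_iterator(v.begin()),
              std::make_move_iterator(v.end()),
              first, last, std::back_inserter(tmp), cmp);

   v.swap(tmp);   // the merged buffer becomes the new storage
}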
Not yet, but if known algorithms are used, the difference between inplace_sort|merge and sort|merge is theoretically big. Merge becomes O(N) instead of O(NlogN). Sort becomes O(logN) instead of O(N(logN)^2).
I think you're misusing the term "inplace" in that paragraph. I think you mean "with no additional memory". "inplace" just means that the result ends up back where the input was. All std:: sorts are inplace.
Yes, you are right, let's call them "no additional memory" methods.
std::sort is never O(N(logN)^2) (or O(logN), but I presume that's a typo!). std::stable_sort is allowed to be, but see below.
Sorry if I didn't explain it clearly: I'm always talking about a stable sort; I think it's the only alternative for flat containers. Even with flat_set/map (unique keys), only duplicates that are inserted after the first occurrence of a key shall be deleted, just as would happen if a one-by-one loop were used to insert the range.
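For example (plain buffers and invented comparators here, just to illustrate the "first occurrence wins" behaviour a unique-key flat container must reproduce):

#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

int main()
{
   using KV = std::pair<int, char>;

   // Range to be inserted into an empty flat_map-like container; key 1 repeats.
   std::vector<KV> buf{{2, 'a'}, {1, 'b'}, {1, 'c'}, {3, 'd'}};

   auto key_less = [](const KV& l, const KV& r) { return l.first < r.first; };
   auto key_eq   = [](const KV& l, const KV& r) { return l.first == r.first; };

   std::stable_sort(buf.begin(), buf.end(), key_less);  // keeps {1,'b'} before {1,'c'}
   buf.erase(std::unique(buf.begin(), buf.end(), key_eq), buf.end());

   // Same result as a one-by-one insertion loop: the later duplicate is dropped.
   assert((buf == std::vector<KV>{{1, 'b'}, {2, 'a'}, {3, 'd'}}));
}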
Right, stable sorts that are "almost" O(N log N) are known; they work with blocks of size sqrt(N). I say "almost" because they have problems when more than sqrt(N) items are equal to each other; I'm not sure whether that has been solved. But these algorithms tend to have a constant factor of maybe 5 or 7 compared to unstable sorts or to sorts that use extra memory, and I don't think I've ever seen one used anywhere. I guess that's why std::stable_sort is not required to be O(N log N).
Thanks. It seems that some implementations have improved that constant factor. At least some benchmarks against std::stable_sort in the following links suggest that they might be quite competitive:

https://github.com/Mrrl/GrailSort
https://github.com/BonzaiThePenguin/WikiSort

According to "Fast Stable Merging and Sorting in Constant Extra Space" (http://comjnl.oxfordjournals.org/content/35/6/643.full.pdf), chapter "5.3 Constant of Proportionality Bounds":

"We conclude that we are guaranteed a worst-case key-comparison and record-exchange grand total not greater than 2.5N log2 N."

"This worst-case total compares favourably with average-case key-comparison and record-exchange totals for popular unstable methods: quick-sort's average-case figure is a little more than 1.4N log2 N; heap-sort's is about 2.3N log2 N. (These values are derived from the analysis in Ref. 14, where we count a single record movement at one third the cost of a two-record exchange.)"

Not sure if those algorithms are properly tested or benchmarked.

Best,

Ion