The input data was uniformly distributed 32-bit random data. The absolute worst-case distribution I've been able to design for spreadsort is binaryalrbreaker with an ALR_THRESHOLD of 3. With 32-bit integers it has 512 elements and takes 15 microseconds for integer_sort versus 14 microseconds for std::sort. With 64-bit integers it has 131072 elements and takes 2.8ms for integer_sort versus 1.6ms for std::sort. If you add or remove any elements, it is no longer the worst-case distribution! If I simply double the number of elements, 64-bit integer_sort takes 5.35ms and std::sort takes 4.56ms. I'm not sure what kind of graph would make sense for this situation. I'd also note that this distribution is specifically designed to make integer_sort slow, and it cannot be scaled up without improving the relative speed of integer_sort.
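For reference, a benchmark along these lines can be sketched as follows. This is my own sketch, not the actual test harness used for the numbers above; it assumes the Boost.Sort spreadsort header and namespace as currently shipped, and an arbitrary element count of ten million.

```cpp
// Sketch: time integer_sort against std::sort on uniformly distributed
// 32-bit random data (element count and seed are arbitrary choices).
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>
#include <boost/sort/spreadsort/spreadsort.hpp>

int main() {
  std::mt19937 gen(42);
  std::uniform_int_distribution<std::uint32_t> dist;  // full 32-bit range
  std::vector<std::uint32_t> a(10000000);
  for (auto &x : a) x = dist(gen);
  std::vector<std::uint32_t> b = a;  // identical input for both sorts

  auto t0 = std::chrono::steady_clock::now();
  boost::sort::spreadsort::integer_sort(a.begin(), a.end());
  auto t1 = std::chrono::steady_clock::now();
  std::sort(b.begin(), b.end());
  auto t2 = std::chrono::steady_clock::now();

  std::cout << "integer_sort: "
            << std::chrono::duration<double>(t1 - t0).count() << "s  std::sort: "
            << std::chrono::duration<double>(t2 - t1).count() << "s\n";
}
```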
You've lost me here. Surely, for any N there is an input that results in worst-case performance, and it should be possible to generate such input?
The worst-case input that I've been able to devise for spreadsort consists of data structured so that:

1) It uses the full width of the key, forcing radix sorting to use as many iterations as possible, maximizing overhead and the number of swaps.

2) Each radix iteration only cuts the problem in half and involves swapping all the elements. Swaps are the slowest part of this algorithm, so maximizing their number is important.

Using restrictions 1 and 2, I get a minimum data size of 512 elements for 32-bit data and 131072 for 64-bit data (my binaryalrbreaker example does this if you set ALR_THRESHOLD to 3); a sketch of that construction follows below. The only way to increase the size beyond that is to add more elements at the very bottom of the distribution, which amortizes the radix costs over more elements and rapidly makes the algorithm more efficient relative to std::sort. Below that size, it simply isn't the worst-case input.
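To make the construction concrete, here is a minimal sketch of a distribution with this binary-tree structure. It is my own reconstruction, not the actual binaryalrbreaker source: the 4-bits-per-level and two-elements-per-leaf constants are assumptions chosen only to reproduce the 512 and 131072 element counts quoted above, whereas the real test derives its shape from ALR_THRESHOLD and the library's internals.

```cpp
// Sketch of a binary-tree worst-case key distribution (assumed constants,
// not the actual binaryalrbreaker code).
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

constexpr unsigned bits_per_level = 4;  // assumed: 8 levels then cover 32 bits

// Recursively emit keys whose successive bit groups are either all zeros or
// all ones: at every radix pass the min-max range spans the whole group (so
// the full key width is used), yet only two buckets are non-empty, so each
// pass merely halves the problem while touching every element.
void fill_worst_case(std::vector<std::uint32_t>& out, std::uint32_t prefix,
                     unsigned level, unsigned levels) {
  if (level == levels) {
    out.push_back(prefix);  // two identical keys per leaf keep the final
    out.push_back(prefix);  // buckets tiny (below the fallback threshold)
    return;
  }
  const std::uint32_t group_max = (1u << bits_per_level) - 1;
  fill_worst_case(out, prefix << bits_per_level, level + 1, levels);
  fill_worst_case(out, (prefix << bits_per_level) | group_max, level + 1, levels);
}

// 32 / 4 = 8 levels -> 2^8 leaves * 2 elements = 512 keys, matching the
// 32-bit size quoted above (the 64-bit analogue gives 2^16 * 2 = 131072).
std::vector<std::uint32_t> make_worst_case_32() {
  std::vector<std::uint32_t> keys;
  fill_worst_case(keys, 0u, 0, 32 / bits_per_level);
  // The recursion emits keys in ascending order; shuffle so the radix passes
  // actually have to swap every element instead of finding them in place.
  std::shuffle(keys.begin(), keys.end(), std::mt19937(1));
  return keys;
}
```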