> On Wed, May 7, 2014 at 5:36 AM, Thijs van den Berg wrote:
>> I did some performance tests with 4 threads, -O3, and ITERATIONS = 10,000,000, and there is no performance difference between the two versions (see below). My opinion is that we should stick with the standard random engine concept I used, merge our code (we both have non-overlapping bits that are needed), and think a bit more about random function concepts at a later stage. For me a random engine is all I need.
> I think my version now provides you with the 'plain old engine' concept that you're looking for. It also *allows* you to do interesting things with restart(), but you can ignore those methods if you wish.
For me that’s a good compromise, but it’s up to Steven (and perhaps some others?) to decide what he wants from the interface. In an earlier version I gave public access to counter manipulators, and those were questioned because they were a non-standard interface, so I made them private.
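To make concrete what I mean by sticking to the plain engine concept while keeping the counter private, here is a minimal sketch. The class name and the toy mixing function are made up for illustration; this is not either of our actual implementations, which would plug in a real pseudo-random function such as Threefry or Philox:

#include <cstdint>
#include <limits>

class toy_counter_engine {
public:
    typedef std::uint64_t result_type;

    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return std::numeric_limits<result_type>::max(); }

    explicit toy_counter_engine(result_type key = 0) : key_(key), ctr_(0) {}

    // Standard engine surface: seed / operator() / discard.
    void seed(result_type key = 0) { key_ = key; ctr_ = 0; }
    result_type operator()() { return prf(key_, ctr_++); }
    void discard(unsigned long long n) { ctr_ += n; }   // O(1) jump-ahead

    // Optional extra in the spirit of restart(): jump to a chosen counter value
    // to start a distinct subsequence; plain-engine users can simply ignore it.
    void restart(result_type ctr) { ctr_ = ctr; }

private:
    // Toy stand-in for a counter-based pseudo-random function (a splitmix64-style
    // mixer), here only so the example compiles; a real engine would use
    // Threefry, Philox, or similar.
    static result_type prf(result_type key, result_type ctr) {
        result_type x = key ^ (ctr * 0x9E3779B97F4A7C15ULL);
        x ^= x >> 30; x *= 0xBF58476D1CE4E5B9ULL;
        x ^= x >> 27; x *= 0x94D049BB133111EBULL;
        return x ^ (x >> 31);
    }

    result_type key_;   // seed / key
    result_type ctr_;   // block counter stays a private detail
};

Used with any standard distribution this behaves like an ordinary engine; the counter-based structure only shows through the O(1) discard() and the optional restart().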
> Your version had a nice feature that mine lacked, so I adopted it: the ability to control the output type independently of the choice of pseudo-random function. With this addition I can produce 32-bit output from a prf that internally uses 64-bit arithmetic (or vice versa). The template is now:
Thanks. I wanted to provide both 32-bit and 64-bit random numbers because 32 bit is still used a lot. My first implementation used a fast reinterpret_cast<>, but that was not endian-invariant, so I had to fix it. I think the interface and consistent behaviour are more important than speed for Boost.Random, and I agree with that.
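To illustrate the endianness point (the function name is made up, this is not the actual code in my submission): extracting the 32-bit halves of a 64-bit prf block with shifts gives the same output stream on every platform, whereas the reinterpret_cast<> shortcut makes the order of the halves depend on the host byte order.

#include <cstdint>

// Portable: the low half comes out first and the high half second on every
// platform, so the engine's output stream is identical everywhere.
inline std::uint32_t half_of(std::uint64_t block, unsigned which)
{
    return static_cast<std::uint32_t>(block >> (which ? 32 : 0));
}

// Non-portable alternative: which 32-bit word is "first" depends on the host
// byte order, so the stream would differ between little- and big-endian machines.
//   const std::uint32_t* p = reinterpret_cast<const std::uint32_t*>(&block);
//   return p[which];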
I think the most enjoyable way forward would be to join efforts into a single submission instead of competing ones. For that you will need to fix the copyright and license. What’s your view on this? Are you doing this in corporate time or personal time?