<aside> You're using the old interface to uniform_01 here, which is deprecated because it is inconsistent with the rest of the distributions. </aside>
Understood. However, there's nothing in the docs to say it's deprecated. In any case, I could have picked any real-valued distribution for the example.
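For reference, and if I have the history right, the current distribution-style interface looks something like this sketch (the older form made uniform_01 wrap the engine itself):

```cpp
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_01.hpp>
#include <iostream>

int main() {
    boost::random::mt19937 eng;

    // Distribution-style interface: uniform_01 holds no engine and is
    // invoked with one, consistent with the other distributions.
    boost::random::uniform_01<double> u01;
    std::cout << u01(eng) << std::endl;
}
```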
Now my concern is that we're taking a 32-bit random integer and "stretching" it to a floating-point type with rather more bits of precision (53 for a double, maybe 113 for a long double, even more in the multiprecision world). So quantization effects will mean that there are many representable values which can never be generated.
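To put numbers on that: a single 32-bit draw mapped as n / 2^32 can only land on multiples of 2^-32, while doubles just below 1.0 are spaced 2^-53 apart, so near 1.0 only one representable double in 2^21 is reachable. A small demonstration:

```cpp
#include <cmath>
#include <iostream>

int main() {
    double grid32    = std::ldexp(1.0, -32);  // spacing of values from one 32-bit draw
    double spacing53 = std::ldexp(1.0, -53);  // spacing of doubles just below 1.0

    std::cout << "32-bit grid spacing:       " << grid32 << "\n"
              << "double spacing near 1.0:   " << spacing53 << "\n"
              << "reachable fraction there:  " << spacing53 / grid32  // ~2^-21
              << std::endl;
}
```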
It's true that I could use independent_bits_engine to gang together multiple random values and then pass the result to uniform_01; however, that supposes we have an unsigned integer type available with enough bits. cpp_int from boost.multiprecision would do it, and this does work, but the conversions involved aren't particularly cheap. It occurs to me that an equivalent to independent_bits_engine, but for floating-point types, could be much more efficient, especially in the binary floating-point case.
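For concreteness, the cpp_int route looks something like this sketch (the widths and type names are illustrative, not prescriptive):

```cpp
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/independent_bits.hpp>
#include <boost/random/uniform_01.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>

namespace mp = boost::multiprecision;

int main() {
    // Gang four 32-bit mt19937 outputs into one 128-bit cpp_int-backed
    // integer per invocation.
    boost::random::independent_bits_engine<
        boost::random::mt19937, 128, mp::uint128_t> eng;

    // Feed the wide engine to uniform_01 over a 113-bit-significand float.
    // The uint128_t -> cpp_bin_float conversions are where the cost hides.
    boost::random::uniform_01<mp::cpp_bin_float_quad> u01;
    std::cout << u01(eng) << std::endl;
}
```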
<aside> It's called generate_canonical. </aside>
Ah, good.
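For the record, the basic call mirrors std::generate_canonical; something like:

```cpp
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/generate_canonical.hpp>
#include <iostream>

int main() {
    boost::random::mt19937 eng;  // 32 bits per invocation

    // Ask for 53 significand bits for a double in [0, 1); the function
    // invokes the engine as often as needed (twice here) to collect them.
    double x = boost::random::generate_canonical<double, 53>(eng);
    std::cout << x << std::endl;
}
```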
However, I don't see it in the docs anywhere? Ah... it's not listed in the docs Jamfile, so it never gets built into the documentation. My guess is that no one else has noticed it either?
So I guess my questions are:
Am I worrying unnecessarily?
<aside> I don't think so. I haven't worried about it much because, as Thijs points out, using a 64-bit engine works well enough for float and double, which accounts for most use cases. For multiprecision, it could be an issue. </aside>
What is best practice in this area anyway?
<aside> I really don't know. </aside>
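Coming back to the point about 64-bit engines and multiprecision, here's a sketch of both cases, assuming generate_canonical accepts a multiprecision real type (I haven't verified that it does):

```cpp
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/generate_canonical.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>

namespace mp = boost::multiprecision;

int main() {
    boost::random::mt19937_64 eng;  // 64 bits per invocation

    // float/double case: one 64-bit draw covers all 53 bits of a double.
    double d = boost::random::generate_canonical<double, 53>(eng);

    // Multiprecision case: request the full 113-bit significand of
    // cpp_bin_float_quad; two 64-bit draws are ganged together.
    mp::cpp_bin_float_quad q =
        boost::random::generate_canonical<mp::cpp_bin_float_quad, 113>(eng);

    std::cout << d << "\n" << q << std::endl;
}
```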
Looks like "use generate_canonical" might be the answer?

John.