1) An implicit conversion lets you assign values such as 0.1 to a rational (which actually leads to 3602879701896397/36028797018963968, not 1/10), ...
If we consider the case of cpp_dec_float and cpp_bin_float,
I believe we allow the implicit conversion from built-in
floating-point types to the multiple-precision floating-point
types. This means that the user must be astute and keenly
aware of what is going on in the class. If the floating-point
approximation of 0.1 is desired, then the plain argument (0.1)
is used. If the exact value of 1/10 is required, then the quoted
string argument ("0.1") must be used --- or the value created
from 1 and subsequently divided by 10.
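For example, a minimal sketch (assuming the usual
cpp_dec_float_50 typedef and headers) of the two constructions:

    #include <iomanip>
    #include <iostream>
    #include <limits>
    #include <boost/multiprecision/cpp_dec_float.hpp>

    int main()
    {
        using boost::multiprecision::cpp_dec_float_50;

        // Constructed from the double literal: stores the binary
        // approximation of 1/10, carried out to 50 decimal digits.
        const cpp_dec_float_50 from_double(0.1);

        // Constructed from the string: stores the exact decimal 0.1.
        const cpp_dec_float_50 from_string("0.1");

        std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
                  << from_double << '\n'  // 0.10000000000000000555111512312578...
                  << from_string << '\n'; // 0.1
    }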
We discussed this point a few times during the development
of Boost.Multiprecision. As far as I recall, we opted for implicit
conversions from the built-in floating-point types to the
multiple-precision floating-point types. And I think we did this
consciously.
Wouldn't it be consistent to have the same behavior for
conversions from built-in floating-point types to multiple-precision
rationals? If so, then one would opt for implicit conversions
there as well.
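Concretely, the question is whether copy-initialization should be
allowed (a sketch assuming cpp_rational; per the ticket, only the
explicit forms currently compile):

    #include <boost/multiprecision/cpp_int.hpp> // cpp_rational

    using boost::multiprecision::cpp_rational;

    cpp_rational a = 0.1;                            // implicit: currently rejected
    cpp_rational b(0.1);                             // explicit construction: OK
    cpp_rational c = static_cast<cpp_rational>(0.1); // explicit cast: OK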
Cheers, Chris.
On Saturday, May 31, 2014 2:37 PM, Marc Glisse wrote:
Folks,
I have an open bug report https://svn.boost.org/trac/boost/ticket/10082 that requests that conversions from floating point to rational multiprecision types be made implicit (currently they're explicit).
But the doc says they are implicit.
Now on the one hand the bug report is correct: these are non-lossy conversions, so there's no harm in them being implicit. However, it still sort of feels wrong to me; the only arguments against it I can come up with are:
1) An implicit conversion lets you assign values such as 0.1 to a rational (which actually leads to 3602879701896397/36028797018963968, not 1/10), whereas making the conversion explicit at least forces you to use a cast (or an explicit construction).
The problem is with people writing 0.1. If they mean the exact value, they have already lost. What Boost.Multiprecision does later is not that relevant, and it seems wrong to me to penalize a library because some users don't understand the basics (and their program is likely broken for a number of other reasons).
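For illustration, a minimal sketch (assuming cpp_rational) of exactly what such a conversion produces:

    #include <iostream>
    #include <boost/multiprecision/cpp_int.hpp> // cpp_rational

    int main()
    {
        using boost::multiprecision::cpp_rational;

        // The conversion from double is exact -- but the double 0.1 is
        // itself only the nearest binary approximation of 1/10.
        const cpp_rational from_double(0.1);
        std::cout << from_double << '\n'; // 3602879701896397/36028797018963968

        // Constructing from two integers gives the exact value.
        const cpp_rational exact(1, 10);
        std::cout << exact << '\n';       // 1/10
    }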
2) Floating point values can result in arbitrarily large integer parts to the rational, effectively running the machine out of memory. Arguably the converting constructor should guard against that, though frankly exactly how is less clear :-(
Er, that might be true if you include mpfr numbers in "floating point", but if you only consider double, the maximum size of the numerator is extremely limited. Even for a binary128 it can't be very big (about 2 KB). There could be good reasons for not making it implicit, but I am not convinced by these two.

-- Marc Glisse
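For reference, a quick sketch (assuming cpp_rational and
Boost.Multiprecision's msb() bit-count utility) that checks the
bound for double:

    #include <iostream>
    #include <limits>
    #include <boost/multiprecision/cpp_int.hpp> // cpp_rational, msb

    int main()
    {
        using boost::multiprecision::cpp_rational;

        // The largest finite double: a numerator of 1024 bits (~128 bytes).
        const cpp_rational big(std::numeric_limits<double>::max());
        std::cout << msb(numerator(big)) + 1 << " bits\n"; // 1024 bits

        // The smallest positive (denormal) double: a denominator of
        // 2^1074, i.e. 1075 bits (~135 bytes).
        const cpp_rational tiny(std::numeric_limits<double>::denorm_min());
        std::cout << msb(denominator(tiny)) + 1 << " bits\n"; // 1075 bits
    }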