John, I'm going through all the issues raised in the review of the safe numerics library. I have to say I thought I was prepared, but it turns out there have been many more problems than I realized. I actually wonder how it got accepted; I presume it's because no one really understands it. Anyway, there are a couple of points which are interesting to expand upon a little. I'm sending this via private email since I don't know that it's all that interesting for the list.

On 3/11/17 11:37 AM, John Maddock via Boost wrote:
OK, here's my review of Robert's library.
By way of something different, I've tried to not read the docs (though I ended up scanning them in the end) and just tried to use the library. Here's what I've found:
1) Using:

    safe<int>(-1) & 1

    constexpr safe_base operator~()
The static assert is triggered for *unsigned types* and not for signed. Even if this is a compiler issue, it indicates a missing test case or two.

The whole issue of bitwise operators has had to be rethought and reimplemented. I think it's better now.
3) If I assign a float to an integer, then I get the error: "conversion of integer to float loses precision" which isn't quite what you meant to say. More particularly, for float conversions I would expect:
* Conversion from a float holding an integer value that's within the range of the target type to always succeed.
* Conversion from a float holding a non-integral value to conditionally succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the range of the float (no checking may be required if the number of bits in the integer is less than the number of bits in the float).

I'm still not decided what behavior I want. There is tons of code like float f = 1 and int j = 1.5f which I personally don't like and would like to inhibit. I'm still going back and forth on this.

4) Is constexpr arithmetic supposed to be supported? I tried:
    constexpr boost::numeric::safe<int> j = 0;
    constexpr boost::numeric::safe<int> k = 3;
    constexpr boost::numeric::safe<int> l = j + k;

constexpr IS fully supported.

Similarly if I try to add a literal to a safe<int>. Again, as the tests all pass, I'm assuming they're missing constexpr tests?

Hmmm - not understanding this:

    constexpr safe x = 120; // includes runtime check that 120 < 127

5) What is the purpose of class safe_literal? constexpr initialization seems to work just fine without it?

Right.
6) In the docs, the example of using error handler dispatch is nonsense (top of page 41)?

Right - I fixed the whole exception functionality, which solved a lot of problems.
7) Related to the above, I see the need to be able to define your own primitives which behave just like yours. The example I have is bit-scan-left/right, which has as a precondition that the argument is non-zero, but there are other bit-fiddling operations (bit count etc.) which some folks may require. A "cookbook" example would be good.

Hmmm - are you referring to the functions in utility.hpp? I conjured up/stole/borrowed those from multiple places which I now don't recall. I added them "as needed"; I can't see anyone needing these in order to use the library.
But there are differences:
a) safe_unsigned_literal<n> is more like a safe_unsigned_range
8) In the tests, you have an undocumented function "log" in the library. I'm not sure if this is intended to be part of the API or not, but IMO it's misnamed; ilog2 seems to be a semi-standard name for this.
Hmmm - the Boost.Integer version (libs/integer/doc/html/boost_integer/log2.html) is called static_log2. But I'm not wedded to the name, and I like my version as it's a lot faster to compile.
9) By way of testing the library's performance, I hooked it into the Miller-Rabin primality test in the multiprecision lib (test attached; it counts how many probable primes there are below 30000000). Here are the times for msvc:
    Testing type unsigned:       time = 17.6503 seconds count = 1857858
    Testing type safe<unsigned>: time = 83.5055 seconds count = 1857858
    Testing type 32-bit cpp_int: time = 36.9026 seconds count = 1857858
and for GCC-MINGW-5.3.0:
    Testing type unsigned:       time = 17.1037 seconds count = 1857858
    Testing type unsigned:       time = 18.4377 seconds count = 1857858
// The above is sort of unbelievable to me. I notice that the title of the second test doesn't include the word "safe", so maybe there is a mistake - I'm still investigating.
    Testing type unsigned:       time = 21.0592 seconds count = 1857858

// This also looks a little too good to be true - it also doesn't say safe.

I'm surprised by the difference in times between msvc and gcc, and also deeply impressed that there is effectively no performance penalty with GCC (I was expecting there to be some!).

I agree - the whole thing is sort of suspicious. Here are my times on my Apple machine - a Mac mini (Late 2012), 2.3 GHz Intel Core i7:
10) I don't find the scope of the library over-broad, in fact I probably find it about right wrt the policies etc.
11) There were a few traits and such that it would be nice to have, the problem as ever, is what to call them and where to put them:
* A trait for extracting the underlying type of a safe<> - use case for example where you enable_if a function on is_safe, so you don't have the template argument list to safe<>.

This is base_type<T>::type. It's described in the SafeNumeric concept part of the manual.
* Do we need a function for extracting the underlying type of a safe<>? Or is casting to the underlying type optimal? Actually, maybe having a function would negate the need for the above trait, as we could just write 'auto x = mysafe.value();'?

Casting to the underlying type is a zero-cost operation; it just returns a reference to the value.
    Testing type unsigned:       time = 16.775 count = 1857858
    Testing type safe<unsigned>: time = 36.6576 count = 1857858
    Testing type 32-bit cpp_int: time = 41.4232 count = 1857858
    Program ended with exit code: 0

These times seem to fall in between yours. It's sort of amazing that the unsigned versions all report almost exactly the same time.

There also exists base_value(s), which returns the value; it is also described in the SafeNumeric concept.
* Do we need traits equivalent to is_signed/is_unsigned/make_signed/make_unsigned? My gut feeling is yes we do; the issue is what to call them and where to put them - we're not allowed to specialize the type_traits versions (either boost or std), but we would need a central "one true trait" version that can be specialized by other libraries as well (Multiprecision for example). We've backed ourselves into a corner over this, I fear!

I addressed this by using numeric_limits. In the safe library, I don't use std::is_signed; I use std::numeric_limits<T>::is_signed. This works for safe types because I specialize numeric_limits for all safe types.
In the "backend" which handles the primitive operations there is "checked_result" (an outcome, if you will). There I use only std::is_signed<T>::value, since the backend only handles the primitive operations and not the safe ones. I'm not convinced of the utility of make_signed in this context. In safe numerics, any time we want to change the type of something, we have to be sure we don't change its arithmetic value; we'll get either a runtime or compile time error if we do. So static_cast<unsigned int>(x), where x is signed, will work and be checked at compile time if possible, at runtime otherwise. Note that casting is constexpr, so it can be done at compile time and still work.
12) When calling boost::integer::gcd/lcm, the sub-optimal(?) non-binary versions are called for safe<> - you might want to look into what traits need to be defined to ensure that the same version as for the underlying type is called.

I need more help understanding this.
safe<int> will be implicitly converted to any other integer type. This of course is a blessing and a curse. I've gone back and forth on this, and now I've figured out what I want to do. I'm going to leave the implicit conversion in, but permit the error policy to trap it at compile time. So I'll be creating an "explicit_static_cast<T>(t)" for those who want to trap the implicit conversions while permitting the explicit ones. The situation I'm addressing is where one unintentionally loses the "safe" attribute when passing a safe value to an unsafe integer type. You'll
Those are all the issues I can see for now... in conclusion, as for whether the library should be accepted, yes I believe it should be, modulo the issues above.
Regards, John.
-- Robert Ramey www.rrsd.com (805)569-3793