On 3/11/17 11:37 AM, John Maddock via Boost wrote:
OK, here's my review of Robert's library.
By way of something different, I've tried to not read the docs (though I ended up scanning them in the end) and just tried to use the library. Here's what I've found:
1) Using:
safe<int>(-1) & 1
I get a hard assert failure at safe_base_operations.hpp:1373. I'm assuming that this is a mistake, and the usual error handlers should be called? Bitwise or has the same issue. More tests required ;)
The bitwise operators have issues in both concept and implementation, which I'm in the process of addressing.
2) I'm not sure if this is an msvc issue but in:
constexpr safe_base operator~()
The static assert is triggered for *unsigned types* and not for signed. Even if this is a compiler issue, it indicates a missing test case or two.
I've been wondering how to address bitwise operations on signed integers. I've been thinking that they are an error which should be trapped at compile time. First of all, I don't see a use case for them, and second, they have portability problems: the results are arithmetically different depending on the size of the int. I can do it either way; it's really a question of what the library is meant to do - insist on arithmetic consistency and correctness - or ... what? For now I've trapped attempts to use bitwise operations on signed integers and permitted them on unsigned integers. But as you can see, I'm conflicted here. I wish we had static_warning in C++, but we don't.
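To make the behavior described above concrete, here is a small sketch (the include path is an assumption on my part, and the compile-time rejection of signed operands is the provisional behavior described above, not settled API):

    #include <boost/safe_numerics/safe_integer.hpp>  // path is an assumption

    void bitwise_demo() {
        using namespace boost::numeric;
        safe<unsigned> u = 0xF0u;
        auto masked = u & 0x0Fu;   // permitted: bitwise ops on unsigned
                                   // integers are portable and well defined
        // safe<int> s = -1;
        // auto bad = s & 1;       // currently trapped at compile time, per above
    }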
3) If I assign a float to an integer, then I get the error: "conversion of integer to float loses precision" which isn't quite what you meant to say. More particularly, for float conversions I would expect:
* Conversion from a float holding an integer value that's within the range of the target type to always succeed.
* Conversion from a float holding a non-integral value to conditionally succeed (with truncation) depending on the policy in effect.
* Conversion *to* a float may also fail if the integer is outside the range of the float (no checking may be required if the number of bits in the integer is less than the number of bits in the float).
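In code, those expectations might read as follows (a sketch of the semantics being asked for, not of the library's current behavior):

    safe<int> a = 1.0f;            // integral value within range: should always succeed
    safe<int> b = 1.5f;            // non-integral: truncation or error, per policy
    safe<signed char> c = 1.0e6f;  // integral but out of range: should fail
    float f = safe<short>(123);    // no check required: short has fewer bits
                                   // than a float can represent exactly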
My implementation of this is sort of an afterthought and needs attention. It's similar to the case above: it's not clear what I want to do - even to me.

    int i = 1.0;

I wonder what we're really doing here. In C/C++ I get it. But when I write the value 1.0 in C++, I mean a value which in some sense approximates 1.0. I think it's a mistake.

    int j = 1.5;

This is pretty clear to me. The fundamental premise of the library would be stated as: "All expressions which change the arithmetic value are considered errors." On the other hand, I might be OK with

    int k = floor(1.5);

I'm still sorting out how I feel about going back and forth between floats and integers.
4) Is constexpr arithmetic supposed to be supported? I tried:
    constexpr boost::numeric::safe<int> j = 0;
    constexpr boost::numeric::safe<int> k = 3;
    constexpr boost::numeric::safe<int> l = j + k;
Which generates an internal compiler error for msvc, and for gcc-5.3 I get:
../../../include/safe_base_operations.hpp:332:35: error: call to non-constexpr function ‘void boost::numeric::dispatch(const boost::numeric::checked_result<R>&) [with EP = boost::numeric::throw_exception; R = int]’
    dispatch<EP>(r);
I would expect this to work and be constexpr. But I get the same result on clang.

I looked into this a little bit.

    constexpr boost::numeric::safe<int> j = 0;

isn't constexpr either, because the "constexpr-ness" is lost with literal values.

    void f1(){
        using namespace boost::numeric;
        constexpr safe<int> j = 0;
        constexpr safe<int> k = 3;
        constexpr safe<int> l = j + k; // compile error
    }

j + k can exceed the range of int, and this can only be detected at runtime, so l = j + k is not a constexpr expression. For this reason I created safe_signed_literal<0>, to be used as in

    constexpr boost::numeric::safe<int> j = safe_signed_literal<0>();

But this doesn't quite do it either, because in the assignment to j the safe_signed_literal (with a range of [0,0]) gets transformed into a variable with a range of [0,65535]; then j + k has to include runtime checking code.
Similarly if I try to add a literal to a safe<int>. Again, as the tests all pass, I'm assuming they're missing constexpr tests?
I don't think it's so much a matter of missing tests as of forgetting the limitations of constexpr.
5) What is the purpose of class safe_literal? constexpr initialization seems to work just fine without it?
safe<short> j = 0 creates a value with a compile-time range of [0,65535]. So the range of 2 * j won't fit in a short any more, and things will have to be checked at runtime. But if I use safe_signed_literal<0> j; it will be rendered as a std::int8_t with a range of [0,0]. Then the range of j * j results in a safe_signed_range<0, 0, ...> which is known at compile time to never overflow, so runtime checking is not necessary. Using expressions that start out with safe_..._literal means expressions are calculated using the smallest types possible - faster code (usually) with no error checking required.

It's unfortunate that when safe<int> is constructed with literal values, the result is not constexpr - the "constexpr-ness" isn't transmitted to literal function arguments. BUT we do have safe_literal, which does retain this information. The following examples should show how this works:

    // your example - doesn't work
    void f1(){
        using namespace boost::numeric;
        constexpr safe<int> j = 0;
        constexpr safe<int> k = 3;
        constexpr safe<int> l = j + k; // compile error
    }

    // damn! - still doesn't work. intermediate values lose range
    void f2(){
        using namespace boost::numeric;
        constexpr safe<int> j = boost::numeric::safe_signed_literal<0>();
        constexpr safe<int> k = boost::numeric::safe_signed_literal<3>();
        constexpr safe<int> l = j + k; // compile error
    }

    // damn! - safe literals by default have void policies, so the
    // binary operations require one operand to have one non-void
    // policy of each type.
    void f3(){
        using namespace boost::numeric;
        safe_signed_literal<0> j;
        safe_signed_literal<3> k;
        constexpr auto l = safe_signed_literal<3>();
        constexpr const safe<int> l2 = j + k; // fails because no policies
    }

    // joy - smooth as silk - no interrupts and constexpr expression.
    void f4(){
        using namespace boost::numeric;
        safe_signed_literal<0, native, trap_exception> j;
        safe_signed_literal<3> k;
        constexpr auto l = safe_signed_literal<3>();
        constexpr const safe<int> l2 = j + k;
    }

    // auto can come in handy also!
    void f5(){
        using namespace boost::numeric;
        constexpr auto j = safe_signed_literal<0, native, trap_exception>();
        constexpr auto k = safe_signed_literal<3>();
        constexpr auto l = j + k;
    }
6) In the docs, the example of using error handler dispatch is nonsense (top of page 41)?
I need more context to find this.
7) Related to the above, I see the need for users to be able to define their own primitives which behave just like the library's.
The example I have is for bit-scan-left/right which has as precondition that the argument is non-zero, but there are other bit-fiddling operations (bit count etc) which some folks may require. A "cookbook" example would be good.
I used to have a lot of these. I boiled it down to the few that I actually use in "utility.hpp". I think there's a case for a small boost library of type utilities, but I don't think this is the place for it.
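By way of illustration, a cookbook-style sketch of the kind of primitive John describes might look like this (all names are hypothetical, and the precondition check is shown as a plain exception rather than the library's policy machinery):

    #include <cstdint>
    #include <stdexcept>

    // Portable fallback bit-scan-reverse: returns the index of the highest
    // set bit. The non-zero precondition is checked explicitly here; a
    // library-integrated version could instead route the failure through
    // the exception policy in effect.
    inline unsigned bit_scan_reverse(std::uint32_t x) {
        if (x == 0)
            throw std::domain_error("bit_scan_reverse: argument must not be zero");
        unsigned i = 0;
        while (x >>= 1u)
            ++i;
        return i;
    }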
8) In the tests, you have an undocumented function "log" in the library. Not sure if this is intended to be part of the API or not, but IMO it's misnamed. ilog2 seems to be a semi-standard name for this.
It's an implementation detail. Stephen has suggested that I use a different name for this, and I'm inclined to use bit_count.
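For what it's worth, the naming distinction in a sketch (this is my reading of the intended semantics; the library's internal function may differ):

    #include <cstdint>

    // ilog2(x): index of the highest set bit, i.e. floor(log2(x)) for x > 0
    constexpr unsigned ilog2(std::uintmax_t x) {
        return x <= 1 ? 0 : 1 + ilog2(x >> 1);
    }

    // bit_count(x): number of bits needed to represent x,
    // i.e. ilog2(x) + 1 for x > 0
    constexpr unsigned bit_count(std::uintmax_t x) {
        return x == 0 ? 0 : 1 + bit_count(x >> 1);
    }

    static_assert(ilog2(8) == 3, "ilog2");
    static_assert(bit_count(8) == 4, "bit_count");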
9) By way of testing the library's performance I hooked it into the Miller-Rabin primality test in the multiprecision lib (test attached, it counts how many probable primes there are below 30000000); here are the times for msvc:
Testing type unsigned:       time = 17.6503 seconds count = 1857858
Testing type safe<unsigned>: time = 83.5055 seconds count = 1857858
Testing type 32-bit cpp_int: time = 36.9026 seconds count = 1857858
and for GCC-MINGW-5.3.0:
Testing type unsigned: time = 17.1037 seconds count = 1857858
Testing type unsigned: time = 18.4377 seconds count = 1857858
Testing type unsigned: time = 21.0592 seconds count = 1857858
I'm surprised by the difference in times between msvc and gcc, and also deeply impressed that there is effectively no performance penalty with GCC (I was expecting there to be some!)
Sounds too good to be true. Are you sure you tested it correctly? It says type unsigned for the second test - I'd expect to see safe<unsigned>, and similarly for the last test. I'm not sure what cpp_int means - is it the promotion policy?

If you had nothing else to do, you could tweak the code a little to use safe_..._range and/or safe_literal - maybe even automatic and constexpr where possible. I would expect/hope this would yield a measurably faster program. I have no idea how much effort this would consume.

BTW, the "trap" exception policy is fun to experiment with - Really.
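For anyone who wants to try it, a sketch of the trap policy in use (the include path is an assumption; native and trap_exception are the policy names used in the examples earlier in this thread):

    #include <boost/safe_numerics/safe_integer.hpp>  // path is an assumption

    void trap_demo() {
        using namespace boost::numeric;
        // with trap_exception, any operation that cannot be proven safe at
        // compile time becomes a compile error - no runtime checks are emitted
        safe<unsigned, native, trap_exception> x = 2u;  // unsigned -> unsigned: no check needed
        // auto y = x * x;  // would not compile: x has range [0, UINT_MAX],
        //                  // so the product cannot be proven not to overflow
    }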
10) I don't find the scope of the library over-broad, in fact I probably find it about right wrt the policies etc.
11) There were a few traits and such that it would be nice to have, the problem as ever, is what to call them and where to put them:
* A trait for extracting the underlying type of a safe<> - the use case, for example, is where you enable_if a function on is_safe, and so don't have the template argument list of the safe<> to hand.
I do have an is_safe, which is important but used only internally.
* Do we need a function for extracting the underlying type of a safe<>?
base_type<T>::type does the job - used but not documented.
Or is casting to the underlying type optimal?
It is optimal. But you might not always know what the underlying type is. See safe_..._range.
Actually, maybe having a function would negate the need for the above trait, as we could just write 'auto x = mysafe.value();' ?
There is also:

    template<class T, T Min, T Max, class P, class E>
    constexpr T base_value(
        const safe_base<T, Min, Max, P, E> & st
    );
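A usage sketch combining the two (assuming the base_type trait and base_value declaration above):

    using namespace boost::numeric;
    safe<int> s = 42;
    int a = static_cast<int>(s);                   // cast, when the underlying type is known
    base_type<safe<int>>::type b = base_value(s);  // when it isn't: the trait
                                                   // recovers the type, and
                                                   // base_value the raw value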
* Do we need traits equivalent to is_signed/is_unsigned/make_signed/make_unsigned? My gut feeling is yes we do, the issue is what to call them, and where to put them - we're not allowed to specialize the type_traits versions (either boost or std), but we would need a central "one true trait" version that can be specialized by other libraries as well (Multiprecision for example). We've backed ourselves into a corner over this I fear!
These are used extensively internally. I've specialized numeric_limits for this purpose:

    auto max = std::numeric_limits<T>::max();
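That means generic code can interrogate safe types the same way it interrogates the built-ins, e.g. (a sketch, assuming the specialization mirrors the primary template; include path is an assumption):

    #include <limits>
    #include <boost/safe_numerics/safe_integer.hpp>  // path is an assumption

    using safe_int = boost::numeric::safe<int>;

    constexpr bool sgn = std::numeric_limits<safe_int>::is_signed;
    constexpr auto lo  = std::numeric_limits<safe_int>::min();
    constexpr auto hi  = std::numeric_limits<safe_int>::max();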
12) When calling boost::integer::gcd/lcm the sub-optimal(?) non-binary versions are called for safe<> - you might want to look into seeing what traits need to be defined to ensure that the same version as the underlying type is called.
I'll have a look.
Those are all the issues I can see for now... in conclusion, as for whether the library should be accepted, yes I believe it should be, modulo the issues above.
Regards, John.