I'm reviewing this library as someone who has tried to use it in the past,
has made minor contributions to it and has a vested interest in improving
numeric support for C++ developers. However, I have very little knowledge
of the code and relatively little experience of Boost and the Boost
community. I hope my comments are helpful nevertheless.
tl;dr: make `safe<>` more like the type I describe in [http://wg21.link/p0554#componentization-safety], or at least ensure that the API can evolve toward this without backward-breaking changes.
----
I do feel that a safe integer type should be available to as many people as
possible and Robert's approach is on the right track. But given the choice,
I would make some significant API changes. Any of these which can be added
incrementally should not delay acceptance of safe_numerics into Boost.
The goal of making `safe<int>` a drop-in replacement for int is essential.
But further, `safe<Rep>` should be a drop-in replacement for `Rep` where
`Rep` is any integer type which exhibits "surprising" results, e.g. through
UB of overflow / divide-by-zero or indeterminate initial value. `Rep`
should not just be any type for which `std::is_integral_v<Rep>` is true and,
ideally, `safe<safe<Rep>>`, while pointless, should compile and function like `safe<Rep>`.
On Fri, Mar 10, 2017, 11:11 AM Peter Dimov via Boost wrote:
John McFarlane wrote:
Conversion from integers to real numbers is not so much *unsafe* as *lossy*. Users intuitively understand that `int n = 0.5;` will not define a variable with value 0.5 and this is often exactly what they intend, e.g.: [...]
Converting to int a floating-point value that (absent the fractional part) cannot be represented in int is undefined behavior, so it is, in fact, unsafe, in addition to lossy.
I think most people would categorize that the same as overflow caused by a narrowing conversion, and as such it should be supported with a runtime overflow check like any other narrowing conversion.
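A minimal sketch of the run-time check being argued for here - hypothetical, not the library's implementation - treating float-to-int conversion as just another checked narrowing:

#include <cmath>
#include <limits>
#include <stdexcept>

// Allow the lossy conversion, but trap the undefined-behavior case Peter
// names: a truncated value that cannot be represented in int.
int checked_int_cast(double v) {
    const double t = std::trunc(v); // conversion to int discards the fraction
    if (std::isnan(t) ||
        t < static_cast<double>(std::numeric_limits<int>::min()) ||
        t > static_cast<double>(std::numeric_limits<int>::max()))
        throw std::range_error("double value does not fit in int");
    return static_cast<int>(t);
}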
On 3/10/17 11:10 AM, Peter Dimov via Boost wrote:
John McFarlane wrote:
Conversion from integers to real numbers is not so much *unsafe* as *lossy*. Users intuitively understand that `int n = 0.5;` will not define a variable with value 0.5 and this is often exactly what they intend, e.g.: [...]
Converting to int a floating-point value that (absent the fractional part) cannot be represented in int is undefined behavior, so it is, in fact, unsafe, in addition to lossy.
Actually I missed that as well. I was permitting conversion of an int to a float, forgetting that this will change values for large integers. So I should trap this as well.
Robert Ramey
BTW - when are we going to get comments from Peter Dimov?
Robert Ramey wrote:
BTW - when are we going to get comments from Peter Dimov?
I read the documentation, and here are some comments on that:
- there is an example that explains that safe<long> is not implicitly convertible to int, even though long is. I think that it should be, subject to the usual constraint that the value can be represented in an int.
- that safe_signed_range and safe_unsigned_range have a default policy of native seems wrong to me. The point of these classes is to represent, well, ranges, and arithmetic by default (if not always) should produce the appropriate range of the result, that is, the equivalent of 'automatic'. In fact, as the ranges have no template parameter for the underlying type, I'm not entirely sure what 'native' is supposed to do here.
- it's not clear why safe_signed_literal and safe_unsigned_literal need policies at all, as all the arithmetic there happens at compile time. There's never a need to promote or throw.
I've misplaced the announcement e-mail; is https://github.com/robertramey/safe_numerics/ the right place for the code? Because if it is, it doesn't follow the Boost convention for the include directory - the headers aren't in include/boost/safe_numerics but in include. There's also no Jamfile in the test directory, which is of minor importance compared to the include issue. The library just doesn't fit into the Boost structure in its present state.
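The implicit-but-checked conversion Peter asks for in his first point could behave like this sketch (illustration only, not the library's code):

#include <limits>
#include <stdexcept>

// Conversion to int succeeds exactly when the value is representable;
// otherwise it traps at run time instead of silently truncating.
int to_int_checked(long v) {
    if (v < std::numeric_limits<int>::min() || v > std::numeric_limits<int>::max())
        throw std::range_error("long value does not fit in int");
    return static_cast<int>(v);
}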
- that safe_signed_range and safe_unsigned_range have a default policy of native seems wrong to me.
In fact, the promotion policy can be eliminated entirely if the underlying type is always made automatic. Specifically, instead of the current arrangement where all user-visible templates resolve to safe_base<...>:
safe<int> x( 5 ); // safe_base<...>
On 3/11/17 2:56 PM, Peter Dimov via Boost wrote:
Robert Ramey wrote:
BTW - when are we going to get comments from Peter Dimov?
I read the documentation, and here are some comments on that:
- there is an example that explains that safe<long> is not implicitly convertible to int, even though long is.
I think the documentation is wrong - I'll double check. It's worked well for me to build the documentation in parallel with the code, but sometimes things don't get clarified until later and the documentation has to be revised. So this can happen.
I think that it should be, subject to the usual constraint that the value can be represented in an int.
Right.
- that safe_signed_range and safe_unsigned_range have a default policy of native seems wrong to me. The point of these classes is to represent, well, ranges, and arithmetic by default (if not always) should produce the appropriate range of the result, that is, the equivalent of 'automatic'. In fact, as the ranges have no template parameter for the underlying type, I'm not entirely sure what 'native' is supposed to do here.
Hmmm - very astute observation. In effect it seems I've intertwined "automatic" with "native". It's more interesting in a compound expression, where the intermediate results returned are ranges. The idea here is to keep the type from growing beyond what holding the result requires. This will diminish the need for runtime checking and for conversion to larger types, which presumably would be slower. Then there is the question that assigning "automatic" to the result of a binary expression would conflict with the operand of the next higher binary expression. This conflict then stops the process. I'll really have to think about this and likely arrive at some sort of compromise (sigh).
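To illustrate the "automatic" idea - the result type grows just enough to hold any possible result, so the operation itself needs no run-time check - here is a sketch with hypothetical names, not the library's actual metaprogramming:

#include <cstdint>
#include <type_traits>

// A product of an N-bit and an M-bit integer needs at most N+M bits, so
// pick the first type at least that wide.
template<class L, class R>
using automatic_multiply_result_t = std::conditional_t<
    sizeof(L) + sizeof(R) <= sizeof(std::int16_t), std::int16_t,
    std::conditional_t<
        sizeof(L) + sizeof(R) <= sizeof(std::int32_t), std::int32_t,
        std::int64_t>>;

// 8-bit * 8-bit always fits in 16 bits - no run-time check required.
static_assert(std::is_same_v<
    automatic_multiply_result_t<std::int8_t, std::int8_t>, std::int16_t>);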
- it's not clear why safe_signed_literal and safe_unsigned_literal need policies at all, as all the arithmetic there happens at compile time. There's never a need to promote or throw.
The default is void, so that when they are combined with some other safe type, the policy of the other operand is used. This prevents two safe literals from being added, though.
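That combining rule can be sketched as a metafunction (hypothetical spelling, not the library's):

// When one operand's policy is void, the other operand's policy wins;
// two void policies leave nothing to combine, which is why two safe
// literals cannot be added.
template<class P1, class P2> struct combine_policy { using type = P1; };
template<class P2> struct combine_policy<void, P2> { using type = P2; };
template<class P1> struct combine_policy<P1, void> { using type = P1; };
template<> struct combine_policy<void, void> {}; // no ::type - use is an error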
I've misplaced the announcement e-mail, is https://github.com/robertramey/safe_numerics/ the right place for the code? Because if it is, it doesn't follow the Boost convention for the include directory - the headers aren't in include/boost/safe_numerics but in include. There's also no Jamfile in the test directory, which is of minor importance compared to the include issue. The library just doesn't fit into the Boost structure in its present state.
I'm aware of this. Boost doesn't require final form for a review. I used a form which is easier for people to use without adding it to their boost tree, running b2, etc. Also, b2 is a big obstacle for most people, so I supported CMake, which is easier for me as well. If it gets accepted into Boost, it will have to be updated to fit the Boost conventions.
Robert Ramey
Robert Ramey wrote:
I'm aware of this. Boost doesn't require final form for a review. I used a form which is easier for people to use without adding it to their boost tree, running b2, etc. Also, b2 is a big obstacle for most people, so I supported CMake, which is easier for me as well.
Well... Given that I already have a Boost development setup, it'd have been much easier for me if I could drop the final form in libs/safe_numerics (or wherever it's meant to go) and b2 my way there. But fair enough.
On 3/10/17 10:55 AM, John McFarlane via Boost wrote:
I'm reviewing this library as someone who has tried to use it in the past, has made minor contributions to it and has a vested interest in improving numeric support for C++ developers. However, I have very little knowledge of the code and relatively little experience of Boost and the Boost community. I hope my comments are helpful nevertheless.
Thank you for taking the time to do this.
tl;dr: make `safe<>` more like the type I describe in [ http://wg21.link/p0554#componentization-safety] or at least ensure that the API can evolve toward this without backward-breaking changes.
from your link:
The integral (signed) and cardinal (unsigned) class templates from [P0106] perform such a role using non-type template parameters, e.g.
template<int Crng, overflow Ofl> class integral;
where Crng specifies the range of the value in bits and overflow is an enum which specifies what happens if overflow is detected. The values of overflow include: undefined - much like signed integer overflow behavior; abort - forces program termination; exception - throws an exception; saturate - limits the value to the maximum/minimum allowed by the range.
To my mind, this seems very very similar to my own formulation:
template<class T, class PP = native, class EP = throw_exception> struct safe;
EP can support policies equivalent to undefined, abort, and throw exception, as well as trap at compile time. So as I said, to me they are almost exactly the same interfaces.
I do feel that a safe integer type should be available to as many people as possible and Robert's approach is on the right track. But given the choice, I would make some significant API changes. Any of these which can be added incrementally should not delay acceptance of safe_numerics into Boost.
I'm all ears. One thing I do NOT see is saturate. I've seen it mentioned in several places - but I'm not convinced.
The goal of making `safe<int>` a drop-in replacement for int is essential.
Agreed.
But further, `safe<Rep>` should be a drop-in replacement for `Rep` where `Rep` is any integer type which exhibits "surprising" results, e.g. through UB of overflow / divide-by-zero or indeterminate initial value. `Rep` should not just be any type for which `std::is_integral_v<Rep>` is true
This is my intention. The requirement is that type T be supported by numeric_limits<T> with a number of settings - is_specialized, is_integer, max(), min() (or lowest()), etc. BTW, safe<T> should also be supported if T meets these requirements. I haven't tested this, though; I might bump up against some implementation oversight.
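That requirement could be spelled as a trait along these lines (a sketch; this exact trait is not part of the library):

#include <limits>

// T qualifies as an underlying type when numeric_limits<T> is specialized
// for an integer type; max(), min() and lowest() come with the specialization.
template<class T>
constexpr bool is_numeric_rep_v =
    std::numeric_limits<T>::is_specialized && std::numeric_limits<T>::is_integer;

static_assert(is_numeric_rep_v<int>);
static_assert(!is_numeric_rep_v<double>);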
ideally, `safe<safe<Rep>>`, while pointless, should compile and function like `safe<Rep>`.
Maybe - from a completeness point of view, but I don't see any practical usage for such a type.
Unfortunately, `safe<int>` fails to be a drop-in
replacement for fundamental integer types for a number of reasons.
Firstly, conversion from floating-point types is not fully supported [ https://github.com/robertramey/safe_numerics/issues/13].
I'm not sure this is still true. I did make changes in response to this issue, but I haven't exhaustively tested inter operability with floating point types. If it doesn't it's a bug - not a design issue.
This alludes to a wider issue with the API of this library: it tries to do too much. Conversion from integers to real numbers is not so much *unsafe* as *lossy*. Users intuitively understand that `int n = 0.5;` will not define a variable with value 0.5 and this is often exactly what they intend, e.g.:
Current behavior is:
safe<int> n;
n = 0.05; // will trap at compile time with an error
n = static_cast<int>(0.05); // will compile OK
My personal belief is that this is correct behavior. Note that I'm not a saint: I DO permit n to be uninitialized. I did this because it's so common and I want the library to be useful in legacy and embedded code. I've got a lot of reservations about this and could change my mind.
void sample(map<int, int>& histogram, double v) { ++histogram[v]; }
If implicit conversions to and from other numeric types are not supported, this prevents potential users from converting existing programs. Bear in mind that implicit conversions do not prevent `safe<>` from catching run-time errors.
The library is meant to support implicit conversions which do not change arithmetic values. I do believe that your example will work - see the example in the tutorial section labeled "Programming by Contract is Too Slow". But your example includes ++histogram[v] where v is a double. My version would choke on that - as I think it should.
And if one wishes to catch risky narrowing conversions, they already have tools to do this, e.g. GCC's `-Wconversion`. In short, it's a separate concern.
This isn't portable and (I believe) results in a compile time error. Limited utility as far as I'm concerned.
Another example of overreach of the API is in governing the results of arithmetic operations. Via policy choice, `safe<int>` can produce auto-widening functionality, e.g. returning a 16-bit result from a multiply operation with 8-bit operands. This behavior belongs in a whole different class. It achieves overlapping safety wins but it complicates the interface. I believe that the correct approach is to produce a type which is dedicated to performing run-time overflow checks and another type which performs automatic widening and to combine them, e.g.:
// simplified for clarity, see: wg21.link/p0554#composition-safe_elastic_integer
template<typename Rep> class auto_widening_integer;
template<typename Rep> class safe_integer;
template<typename Rep> using extra_safe_integer = safe_integer<auto_widening_integer<Rep>>;
Of the two component classes, `auto_widening_integer` can be used to perform limited overflow-avoidance with zero run-time overhead. (The penalty is that if, say, you multiply two 32-bit integers together, you get back a 64-bit result.) Meanwhile, `safe_integer` provides iron-clad run-time safety checking and obeys the promotion rules of its `Rep` type. The combination of them reduces the number of run-time checks by reducing the chance of overflow.
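The division of labor shows up even with plain integers (a runnable sketch; the hypothetical classes above would automate exactly this):

#include <cstdint>
#include <stdexcept>

// 8-bit operands can never overflow a 16-bit product, so the multiply
// needs no check; only the final narrowing back to 8 bits does.
std::int8_t checked_scale(std::int8_t a, std::int8_t b) {
    const std::int16_t wide = std::int16_t(a) * std::int16_t(b); // never overflows
    if (wide < INT8_MIN || wide > INT8_MAX) // the single remaining run-time check
        throw std::range_error("product does not fit in 8 bits");
    return static_cast<std::int8_t>(wide);
}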
This solution is more complex,
Agreed.
but the individual components are simpler and self-sufficient.
Maybe.
I'm a great believer in de-coupling, but I'm skeptical that a library with this functionality can be realised in the manner outlined above. I also think this approach would conflict with the goal of "almost drop-in replaceability". I have factored out the functions of type promotion and error handling into policy classes for which there are obvious default implementations. And the library includes most required policies, so users won't have to write their own. I believe that this puts the customization in the right place.
Unfortunately, due to these kinds of design choices, I have not spent more time attempting to integrate Robert's library with my own. I would dearly like to include `safe<>` as part of a wider set of numeric components which users can then combine to their hearts' content.
We've factored things in a different way, so I'm not convinced that there is a way to compose safe numerics and the ideas outlined in the papers you cite in a meaningful way. My conclusion after looking at the papers is that we're all attempting to address the same important problem in different ways. My way places a high priority on handling legacy code. The other way seems more appropriate for creating new code which is meant to support correctness from the start - building in quality rather than trying to add it on, as I do.
Details of how I think that might come about can be found in my (shamelessly plugged) paper, [wg21.link/p0554].
I accept that's a tall order. At the very least I would like developers of substantial projects to be able to replace built-in integers with the following:
#if defined(NDEBUG)
template<typename Rep> using Integer = Rep;
#else
template<typename Rep> using Integer = safe<Rep>;
#endif
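With that alias, application code reads the same in both configurations, e.g.:

// Plain int arithmetic in release builds, checked arithmetic in debug builds.
Integer<int> total = 0;
for (int i = 0; i < 10; ++i)
    total += i;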
That is certainly my intention. I believe this usage is well supported by the safe numerics library.
Hope that helps,
It does. Thanks for your review.
John
On Fri, Mar 10, 2017 at 11:59 AM Robert Ramey via Boost <boost@lists.boost.org> wrote:
On 3/10/17 10:55 AM, John McFarlane via Boost wrote:
I'm reviewing this library as someone who has tried to use it in the past, has made minor contributions to it and has a vested interest in improving numeric support for C++ developers. However, I have very little knowledge of the code and relatively little experience of Boost and the Boost community. I hope my comments are helpful nevertheless.
Thank you for taking the time to do this.
tl;dr: make `safe<>` more like the type I describe in [http://wg21.link/p0554#componentization-safety], or at least ensure that the API can evolve toward this without backward-breaking changes.
from your link:
The integral (signed) and cardinal (unsigned) class templates from [P0106] perform such a role using non-type template parameters, e.g.
template<int Crng, overflow Ofl> class integral;
where Crng specifies the range of the value in bits and overflow is an enum which specifies what happens if overflow is detected. The values of overflow include: undefined - much like signed integer overflow behavior; abort - forces program termination; exception - throws an exception; saturate - limits the value to the maximum/minimum allowed by the range.
To my mind, this seems very very similar to my own formulation
template<class T, class PP = native, class EP = throw_exception> struct safe;
EP can support policies equivalent to undefined, abort, and throw exception, as well as trap at compile time. So as I said, to me they are almost exactly the same interfaces.
This probably isn't clear enough in my paper, but P0106 is Lawrence Crowl's fixed-point paper - not mine. In general, I argue for more type template parameters and fewer non-type template parameters. I would argue that `EP` is a "necessary evil" - rather like the second parameter of `std::vector`. However, `PP` should be implicitly specified by `T`: `T` already does whatever promotion `T` does, so to speak. So just do whatever naked `T` would do WRT promotion.
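John's "do whatever naked `T` would do" can be pinned down with decltype (a runnable sketch of native promotion semantics):

#include <type_traits>

// Under a native-style promotion policy, safe<T> op safe<U> would yield
// safe<decltype(T() op U())> - exactly what the raw types already produce.
static_assert(std::is_same_v<decltype(char() + char()), int>);    // integral promotion
static_assert(std::is_same_v<decltype(1 + 1u), unsigned int>);    // usual arithmetic conversions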
I do feel that a safe integer type should be available to as many people as possible and Robert's approach is on the right track. But given the choice, I would make some significant API changes. Any of these which can be added incrementally should not delay acceptance of safe_numerics into Boost.
I'm all ears. One thing I do NOT see is saturate. I've seen it mentioned in several places - but I'm not convinced.
You mean you don't see the need for it? It comes up quite often as a strategy for dealing with overflow, and a safe integer type seems like a good place to house it. Even if it isn't implemented now, a policy interface which allows users to implement it is going to give the library more users who use it to make their integers safe - surely the point of the exercise here.
The goal of making `safe<int>` a drop-in replacement for int is essential.
Agreed.
But further, `safe<Rep>` should be a drop-in replacement for `Rep` where `Rep` is any integer type which exhibits "surprising" results, e.g. through UB of overflow / divide-by-zero or indeterminate initial value. `Rep` should not just be any type for which `std::is_integral_v<Rep>` is true
This is my intention. The requirement is that type T be supported by numeric_limits<T> with a number of settings - is_specialized, is_integer, max(), min() (or lowest()), etc. BTW, safe<T> should also be supported if T meets these requirements. I haven't tested this, though; I might bump up against some implementation oversight.
ideally, `safe<safe<Rep>>`, while pointless, should compile and function like `safe<Rep>`.
Maybe - from a completeness point of view, but I don't see any practical usage for such a type.
Unfortunately, `safe<int>` fails to be a drop-in
replacement for fundamental integer types for a number of reasons.
Firstly, conversion from floating-point types is not fully supported [ https://github.com/robertramey/safe_numerics/issues/13].
I'm not sure this is still true. I did make changes in response to this issue, but I haven't exhaustively tested inter operability with floating point types. If it doesn't it's a bug - not a design issue.
This alludes to a wider issue with the API of this library: it tries to do too much. Conversion from integers to real numbers is not so much *unsafe* as *lossy*. Users intuitively understand that `int n = 0.5;` will not define a variable with value 0.5 and this is often exactly what they intend, e.g.:
Current behavior is:
safe<int> n;
n = 0.05; // will trap at compile time with an error
n = static_cast<int>(0.05); // will compile OK
My personal belief is that this is correct behavior.
Note that I'm not a saint: I DO permit n to be uninitialized. I did this because it's so common and I want the library to be useful in legacy and embedded code. I've got a lot of reservations about this and could change my mind.
I would make both choices differently. Because you can assign 0.05 to an int, you should also be able to assign 0.05 to safe<int>, or else it's inherently not a drop-in replacement. And of the things that make fundamental scalar types unsafe, the indeterminate value of default-initialized instances ranks higher than either overflow or divide-by-zero in my experience. Fixing this would in no way break legacy code. It would impose a run-time cost, but normally this will be optimized out.
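A sketch of what fixing default-initialization might look like - hypothetical, since as Robert says above the library currently leaves the value indeterminate:

// Value-initializing the stored representation gives default-constructed
// objects a determinate value (zero); when the object is assigned before
// use, optimizers routinely eliminate the redundant store.
template<class Rep>
class safe_sketch {
    Rep m_value{}; // value-initialized - never an indeterminate read
public:
    safe_sketch() = default;
    constexpr explicit safe_sketch(Rep v) : m_value(v) {}
    constexpr Rep value() const { return m_value; }
};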
void sample(map<int, int>& histogram, double v) { ++histogram[v]; }
If implicit conversions to and from other numeric types are not supported, this prevents potential users from converting existing programs. Bear in mind that implicit conversions do not prevent `safe<>` from catching run-time errors.
The library is meant to support implicit conversions which do not change arithmetic values. I do believe that your example will work - see the example in the tutorial section labeled "Programming by Contract is Too Slow".
But your example includes ++histogram[v] where v is a double. My version would choke on that - as I think it should.
That is where we disagree.
And if one wishes to catch risky narrowing conversions, they already have tools to do this, e.g. GCC's `-Wconversion`. In short, it's a separate concern.
This isn't portable and (I believe) results in a compile time error. Limited utility as far as I'm concerned.
Not if the conversion uses `static_cast`: https://godbolt.org/g/13aaO3
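The godbolt example boils down to this: the warning fires on the implicit narrowing but stays silent on the cast, which is exactly what legacy code tends to contain.

// Compile with -Wconversion:
void narrowing_demo(long big) {
    int a = big;                   // warning: conversion may change value
    int b = static_cast<int>(big); // no warning - the explicit cast silences it
    (void)a; (void)b;              // a run-time check would still catch both
}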
Another example of overreach of the API is in governing the results of arithmetic operations. Via policy choice, `safe<int>` can produce auto-widening functionality, e.g. returning a 16-bit result from a multiply operation with 8-bit operands. This behavior belongs in a whole different class. It achieves overlapping safety wins but it complicates the interface. I believe that the correct approach is to produce a type which is dedicated to performing run-time overflow checks and another type which performs automatic widening and to combine them, e.g.:
// simplified for clarity, see: wg21.link/p0554#composition-safe_elastic_integer
template<typename Rep> class auto_widening_integer;
template<typename Rep> class safe_integer;
template<typename Rep> using extra_safe_integer = safe_integer<auto_widening_integer<Rep>>;
Of the two component classes, `auto_widening_integer` can be used to perform limited overflow-avoidance with zero run-time overhead. (The penalty is that if, say, you multiply two 32-bit integers together, you get back a 64-bit result.) Meanwhile, `safe_integer` provides iron-clad run-time safety checking and obeys the promotion rules of its `Rep` type. The combination of them reduces the number of run-time checks by reducing the chance of overflow.
This solution is more complex,
Agreed.
but the individual components are simpler and self-sufficient.
Maybe.
I'm a great believer in de-coupling, but I'm skeptical that a library with this functionality can be realised in the manner outlined above.
I also think this approach would conflict with the goal of "almost drop-in replaceability"
I have factored out the functions of type promotion and error handling into policy classes for which there are obvious default implementations. And the library includes most required policies, so users won't have to write their own. I believe that this puts the customization in the right place.
Unfortunately, due to these kinds of design choices, I have not spent more time attempting to integrate Robert's library with my own. I would dearly like to include `safe<>` as part of a wider set of numeric components which users can then combine to their hearts' content.
We've factored things in a different way, so I'm not convinced that there is a way to compose safe numerics and the ideas outlined in the papers you cite in a meaningful way.
My conclusion after looking at the papers is that we're all attempting to address the same important problem in different ways. My way places a high priority on handling legacy code. The other way seems more appropriate for creating new code which is meant to support correctness from the start - building in quality rather than trying to add it on, as I do.
I draw the opposite conclusion but I'm glad we both have the same aim to provide a low-resistance path to safe arithmetic.
Details of how I think that might come about can be found in my (shamelessly plugged) paper, [wg21.link/p0554].
I accept that's a tall order. At the very least I would like developers of substantial projects to be able to replace built-in integers with the following:
#if defined(NDEBUG)
template<typename Rep> using Integer = Rep;
#else
template<typename Rep> using Integer = safe<Rep>;
#endif
That is certainly my intention. I believe this usage is well supported by the safe numerics library.
Hope that helps,
It does. Thanks for your review.
My pleasure. Good luck!
John
AMDG
On 03/10/2017 09:29 PM, John McFarlane via Boost wrote:
On Fri, Mar 10, 2017 at 11:59 AM Robert Ramey via Boost < boost@lists.boost.org> wrote:
I'm all ears. One thing I do NOT see is saturate. I've seen it mentioned in several places - but I'm not convinced.
You mean you don't see the need for it? It comes up quite often as a strategy for dealing with overflow, and a safe integer type seems like a good place to house it. Even if it isn't implemented now, a policy interface which allows users to implement it is going to give the library more users who use it to make their integers safe - surely the point of the exercise here.
It should be possible to handle saturation with the following extension:
- If the ExceptionPolicy returns non-void, then the value it returns will be converted to the result type (the safe type, that is, not the stored type).
This requires two additional pieces to work:
- The ExceptionPolicy needs to distinguish between positive and negative overflow.
- There need to be special values that can convert to the min and max values of safe_base.
// Hypothetical saturate ExceptionPolicy:
template<class T>
struct min_value_t {
    constexpr T operator()() const
    { return std::numeric_limits<T>::min(); }
};
template<class T>
struct max_value_t {
    constexpr T operator()() const
    { return std::numeric_limits<T>::max(); }
};
template<template<class> class F>
struct special_value_t {
    template<class T>
    constexpr operator T() const // converts to the target type's min/max
    { return F<T>{}(); }
};
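Following Steven's outline, a saturating policy might then look like this - the handler names are assumed for illustration and are not the library's actual ExceptionPolicy concept:

// Hypothetical policy: on overflow, return a special value which converts
// to the min/max of whichever safe type is being produced.
constexpr special_value_t<min_value_t> min_value{};
constexpr special_value_t<max_value_t> max_value{};
struct saturate {
    static constexpr auto negative_overflow_error(const char*) { return min_value; }
    static constexpr auto positive_overflow_error(const char*) { return max_value; }
};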
Note that I'm not a saint. I DO permit n to be uninitialized. I did this because it's so common and I want the library to be useful on legacy and embedded code. I've got a lot of reservations about this and could change my mind.
<snip> And of the things that make fundamental scalar types unsafe, the indeterminate value of default-initialized instances ranks higher than either overflow or divide-by-zero in my experience.
Why not let the ExceptionPolicy decide? Just add an uninitialized_error(const char*) to the ExceptionPolicy. Combined with my hypothetical extension above, this would also allow automatic zero-initialization.
Incidentally, I'm not personally very concerned about indeterminate values, as valgrind works quite nicely for flushing out such problems, and they're usually easy to fix.
In Christ,
Steven Watanabe
On 3/10/17 8:29 PM, John McFarlane via Boost wrote:
On Fri, Mar 10, 2017 at 11:59 AM Robert Ramey via Boost wrote:
<snip>
I'm all ears. One thing I do NOT see is saturate. I've seen it mentioned in several places - but I'm not convinced.
You mean you don't see the need for it? It comes up quite often as a strategy for dealing with overflow, and a safe integer type seems like a good place to house it. Even if it isn't implemented now, a policy interface which allows users to implement it is going to give the library more users who use it to make their integers safe - surely the point of the exercise here.
My major concern was that it would pollute the currently elegant design (my characterization) by mucking things up with something that is orthogonal to the unitary goal of the library - program correctness. In the meantime, Steven Watanabe's post describes how small tweaks to the ExceptionPolicy concept would permit this functionality to be implemented as an exception policy. Looked at in this manner, I would love to see an exception policy crafted to implement this. It would be an excellent demo for the documentation.
The goal of making `safe<int>` a drop-in replacement for int is essential.
Agreed.
This alludes to a wider issue with the API of this library: it tries to do too much. Conversion from integers to real numbers is not so much *unsafe* as *lossy*. Users intuitively understand that `int n = 0.5;` will not define a variable with value 0.5 and this is often exactly what they intend, e.g.:
your example includes ++histogram[v] where v is a double. My version would choke on that - as I think it should.
That is where we disagree.
And if one wishes to catch risky narrowing conversions, they already have tools to do this, e.g. GCC's `-Wconversion`. In short, it's a separate concern.
This isn't portable and (I believe) results in a compile time error. Limited utility as far as I'm concerned.
Not if the conversion uses `static_cast`: https://godbolt.org/g/13aaO3
LOL - that is very interesting - looks like someone is introducing
Another example of overreach of the API is in governing the results of arithmetic operations. Via policy choice, `safe<int>` can produce auto-widening functionality, e.g. returning a 16-bit result from a multiply operation with 8-bit operands. This behavior belongs in a whole different class. It achieves overlapping safety wins but it complicates the interface. I believe that the correct approach is to produce a type which is dedicated to performing run-time overflow checks and another type which performs automatic widening and to combine them, e.g.:
// simplified for clarity, see: wg21.link/p0554#composition-safe_elastic_integer
template<typename Rep> class auto_widening_integer;
template<typename Rep> class safe_integer;
template<typename Rep> using extra_safe_integer = safe_integer<auto_widening_integer<Rep>>;
Looking at this I see:
template<class Rep>
using auto_widening_integer = safe<Rep, automatic>;
My conclusion after looking at the papers is that we're all attempting to address the same important problem in different ways. My way places a high priority on handling legacy code. The other way seems more appropriate for creating new code which is meant to support correctness from the start - building in quality rather than trying to add it on, as I do.
I draw the opposite conclusion but I'm glad we both have the same aim to provide a low-resistance path to safe arithmetic.
Having said this, I recall that a GSoC project has implemented the Crowl proposal. I'll try to track this down and see if we can get it added to the Boost Library Incubator.
Robert Ramey
PS: Looks like the over-quoting nanny is on vacation. I think posts are improved by <snip>, where points previously addressed, and not being added to, are removed. It makes things easier to find. RR
participants (4)
- John McFarlane
- Peter Dimov
- Robert Ramey
- Steven Watanabe