On 6 October 2013 19:14, Matt Calabrese
On Sun, Oct 6, 2013 at 2:11 AM, Daniel James
wrote: How is that not introducing undefined behaviour? Say there's a function:
void foo(boost::shared_ptr<...> const& x) {
    if (!x) { something_or_other(); } else { blah_blah(*x); }
}
By using shared_ptr_nonull, that check can be removed:
void foo(boost::shared_ptr_nonnull<...> const& x) { blah_blah(*x); }
But then a caller believes that their shared_ptr is never null, so they copy it into a shared_ptr_nonnull without checking for null first, and that action introduces the possibility of undefined behaviour where there was none before (given that there's always a possibility of bugs in non-trivial code).
No response to this example?
With a regular shared pointer, the possibility for UB arises whenever you dereference. With a non-null shared pointer, the UB possibility exists only during initialization/assignment, assuming that the programmer is doing so from a raw pointer.
Undefined behaviour doesn't end after the constructor; it sticks around and affects every other method call. So what was undefined behaviour is still undefined behaviour. And since a shared pointer must first be set to null before it can exhibit null-based undefined behaviour, the probability is strictly greater. It doesn't matter that there are fewer places it can happen. Btw. it isn't just raw pointers that need checking; it's quite hard to account for everything that needs to be checked.
Whenever you have a non-null shared pointer, you cannot have UB simply by dereferencing it. I know you will retort "but what if someone violated preconditions" again, but I will reiterate, that is the programmer's bug.
You do have a very odd definition of "cannot".
This is true of every single piece of code in the standard library that has specified preconditions, and of any library at all that /properly/ specifies preconditions for functions. It's not simply by convention or doctrine or whatever you want to call it; it's because it is what makes sense. There is nothing special about a non-null shared pointer that changes this. If you violate the precondition it is your fault, not the library's. By providing named construction functions, we partially get around the issue, and these should be preferred whenever possible anyway.
As to whether or not non-nullness should be a precondition, I've explained why it should be a precondition as opposed to documented check/throw and I can't say much more if you simply don't see the rationale.
It's a bit arrogant to believe that if someone disagrees with you, it's because they don't understand your argument. I haven't called anything doctrine. If I was going to use a religious word, I think orthodoxy would be more appropriate.
...that's precisely the point. You /can't/ deal with it. That's why it's undefined. These are bugs that are to be found during testing, which is why you assert rather than specify check/throw behavior.
Do you really think all such bugs can be found in testing? This is part of the problem.
What I was getting at is that it's easier to avoid using exceptions as control flow, as you're always aware that you're doing it. You'll never do it accidentally; it's always a conscious activity. But it's harder to avoid violating preconditions, as you're usually unaware that you're doing it, since it's almost always done by accident. And also that using exceptions for control flow might go against our principles, but it'll still cause less harm than undefined behaviour would, since it has more of a chance of being predictable. To say that A is worse than B is not an endorsement of B; it's pointing out the problems in A.
My point isn't that preconditions, in a general sense, are easier or harder to work with than exceptions.
I wasn't explaining your point, I was explaining my point.