On Oct 5, 2013, at 10:32 AM, Daniel James wrote:
On 5 October 2013 14:49, Stewart, Robert wrote:
Pete Bartlett wrote:
On 4 October 2013 17:45, Eric Niebler wrote:
On 10/4/2013 9:20 AM, Matt Calabrese wrote:
but I definitely am against an exception for this type of programmer error.
This is the crux of it. If this condition really does represent a programmer error (and IMO in this case it does), then Matt is right. Throwing is wrong. Programmer error == bug == your program is already in some weird state. Continuing by throwing an exception and executing an arbitrary amount of code is not good.
I don't think this is always the case. For example, unexpected output from a parser should not be an indicator of an error in the global state.
There should be code between the two ensuring that the output of one doesn't violate the precondition of the other.
Just because there should be something, doesn't mean that there is. People get this wrong all the time.
There are many ways to write code wrong. That doesn't mean that violating a precondition should trigger an exception.
Precondition violations ==> assertions. Use BOOST_ASSERT. That gives people a way to hook the behavior while giving a sane default.
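For anyone who hasn't used the hook, a minimal sketch of how it works; the handler body and the little example function below are made up for illustration, not anything proposed in this thread. BOOST_ENABLE_ASSERT_HANDLER is normally defined project-wide; defining it before the include has the same effect for one translation unit.

    #define BOOST_ENABLE_ASSERT_HANDLER
    #include <boost/assert.hpp>
    #include <cstdlib>
    #include <iostream>

    namespace boost
    {
        // User-supplied handler: BOOST_ASSERT forwards failures here, so a
        // project can log, abort, break into a debugger, etc.
        void assertion_failed(char const* expr, char const* function,
                              char const* file, long line)
        {
            std::cerr << file << ':' << line << ": assertion '" << expr
                      << "' failed in " << function << std::endl;
            std::abort();  // project-wide policy lives here
        }
    }

    int checked_divide(int num, int den)
    {
        BOOST_ASSERT(den != 0);  // precondition; violations are programmer errors
        return num / den;
    }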
If throwing an exception when this is null is a bad idea, then not checking in release mode is surely a terrible one.
If the code isn't checking for precondition violations, then it isn't ready to handle the exception, either. If the programmer knows enough to handle the exception, then s/he knows enough to prevent it.
If you avoid excessive state and make good use of RAII, handling exceptions is generally easier than avoiding bugs. Of course, that does depend on what sort of software you're writing.
If the exception is triggered by an unknown condition, such as you're advocating, then there can be no handler for it. It must, of necessity, unwind to the top. Code cannot rightly continue as if there had been no exception. In that case, there's little gained over an assertion, save for the possibility that other threads may be able to complete independent work.
And if the programmer knows enough to avoid using a null pointer, then why would they use a class that ensures that a pointer isn't null?
The reason is the same as using a reference in your code: once you've checked for a null pointer and formed a reference, you can avoid checking for null thereafter.
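That is, something along these lines (the names are invented for illustration): check for null once, form a reference, and downstream code carries no further checks.

    struct widget { void poke() {} };

    void use_widget(widget& w)  // takes a reference: no null check needed here
    {
        w.poke();
    }

    void on_pointer(widget* p)
    {
        if (!p)
            return;        // reject (or report) the null exactly once, up front
        widget& w = *p;    // from here on, w is known to refer to an object
        use_widget(w);
    }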
Anyway, there are two separate issues here:
1) Whether or not the check should always be present.
2) What should happen when the check fails.
My strongest argument concerns the first point, and I hope it's clearly the one I'm more concerned with. So I'm finding it a bit odd that it's not really being addressed by most responses.
The point of the proposed class is that, once (properly) instantiated, it's a signal to code that a null check is unnecessary, just as using a value or reference is. The difference is that the proposed class can share ownership and extend the lifetime of the referenced object. Thus, the crux is proper construction. That, I think, brings us to your point 1.

The non-null requirement cannot be enforced at compile time, unfortunately. The two options are to declare it a precondition and use an assertion, or to test for null and throw an exception when the test fails. It is possible to combine those approaches, however. The ctor can assert the precondition and some or all of the factory functions can do a runtime check and throw. (If only some, the programmer can choose which to call.)

Still, the programmer using this class can be expected to know that proper construction requires a non-null pointer and so to take care to ensure that. After all, the programmer is typing the class name, which indicates this expectation. The likely outcome of failure in this regard, in a non-debug build, is a null pointer dereference elsewhere. Unchecked, that means a segfault. IOW, the problem will be found.

___
Rob

(Sent from my portable computation engine)
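P.S. A rough sketch of the combined ctor/factory approach, in case it helps make the idea concrete. The class and factory names (non_null_shared_ptr, checked) are made up here and are not meant to be the proposed interface; the ctor asserts the precondition, while the factory is opt-in for callers who want a runtime check and an exception instead.

    #include <boost/assert.hpp>
    #include <boost/shared_ptr.hpp>
    #include <stdexcept>

    template <typename T>
    class non_null_shared_ptr
    {
    public:
        // Precondition: p is non-null. Violations are programmer errors.
        explicit non_null_shared_ptr(boost::shared_ptr<T> const& p)
            : p_(p)
        {
            BOOST_ASSERT(p_);
        }

        // Checked factory: callers who cannot guarantee the precondition
        // pay for a test and get an exception instead of undefined behavior.
        static non_null_shared_ptr checked(boost::shared_ptr<T> const& p)
        {
            if (!p)
                throw std::invalid_argument("null pointer");
            return non_null_shared_ptr(p);
        }

        T& operator*() const { return *p_; }
        T* operator->() const { return p_.get(); }

    private:
        boost::shared_ptr<T> p_;
    };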