On 9 October 2013 10:33, Rob Stewart wrote:
I was not saying that it wasn't a null pointer, because it obviously was. I was saying that the exception handler can have no idea what caused the null pointer that was clearly not expected. If a null was anticipated, the programmer would have tested for it. How, then, can the handler account for the problem to allow the program to continue safely?
Better than if there's no check at all. But this really is going nowhere, so I'll just skip to something I said that you seem to have missed, or perhaps wasn't clear enough:
(Remember that my main argument is that there should always be a check).
As I said earlier, there are two separate parts to my argument: whether or not a check should be made, and what to do if it fails. Now, I was very clear that I feel my argument on whether the check is required is much stronger than my argument about exceptions. Most of the arguments have been about exceptions vs. undefined behaviour. So I still feel that the need for this check is much greater than the benefit from a possible segfault. If you don't, then I'm unlikely to change your mind. I'll just go through the efficiency points.
Library code can't afford to assume such things don't matter.
It can in this case. The cost of an allocation is far greater than the cost of a null check. If you really think the null check is too expensive, then use shared_ptr.
This is a class that's concerned with safety, not efficiency. If there's a safe option and an unsafe option, then the safe option should be the default, and the unsafe option should be the verbose one.
As I've said numerous times, factory functions could be created that throw and others that don't. The ctor shouldn't. You could add a nothrow_t overload of the ctor, I suppose.
Making safe construction more verbose than unsafe construction is a bad idea. If you disagree with that, we're really not going to get anywhere.
I offered one example of how one can know that the check is unnecessary. I wasn't implying a loop repeatedly creating a pointer to one static object. I get the feeling you're being purposely difficult now.
If it isn't repeatedly done, then the check certainly is negligible.
Perhaps there's an array of objects. Perhaps the objects are in a map. There are numerous sources of non-null pointers.
Then you have to allocate memory for each one. Which is far more expensive than a null check.
But if you can't do that, what will this check really cost? The branch prediction should be for the non-null case; if you're concerned about that, you can add a hint in some compilers. I'd actually hope that the compiler would optimise the check away in many cases, especially when initialising from a static object.
Optimizations are possible, but they can be thwarted by many factors.
I know, that's why I hedged my argument ("hope" and "many cases"). That wasn't an important point anyway.
And the exception creation code should be separate from the normal code anyway, so it shouldn't stress the cache too much.
If you say so. The typical design would put that code in the ctor body, not a function.
That line was a bit muddled; the point was that compilers should place exception-handling code separately from the normal code paths.
Then you've got the cost of allocating the shared pointer's control block, which is expensive. You might try to avoid that by using enable_shared_from_this, but then you've got the weak pointer check.
Are you saying the overhead of that would swamp the pipeline stall, if it occurs? You may be right, but the stall can still add a significant delay.
Sorry, I was wrong there. 'shared_from_this' wouldn't require a null check, so there's no performance loss at all.