On Oct 8, 2013, at 2:38 PM, Daniel James wrote:
On 8 October 2013 02:09, Rob Stewart wrote:
On Oct 7, 2013, at 5:39 PM, Daniel James wrote:
On 6 October 2013 12:04, Rob Stewart wrote:
wrote: If the exception is triggered by an unknown condition, as you're advocating, then there can be no handler for it. It must, of necessity, unwind to the top. Code cannot rightly continue as if there had been no exception. In that case, there's little gained over an assertion, save for the possibility that other threads may be able to complete independent work.
later:
The likely outcome of failure in this regard, in a non-debug build, is a null pointer dereference elsewhere. Unchecked, that means a segfault. IOW, the problem will be found.
So you're saying that when it's an exception, it's triggered by an "unknown condition", not by a null, which is surely a known condition.
No
"No, there's an unknown condition", or "no, that's not what I'm saying"?
I was not saying that it wasn't a null pointer, because it obviously was. I was saying that the exception handler can have no idea what caused the null pointer, which was clearly not expected. If a null had been anticipated, the programmer would have tested for it. How, then, can the handler account for the problem to allow the program to continue safely?
And you're claiming that after an exception the code will continue as if there was no exception.
No. Pete, I think it was, advocated letting thousands of tasks continue while a few failed.
How is that continuing as if there was no exception? There are strategies for recovering from such exceptions; a simple one is to write the code in question in a pure functional style.
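A minimal sketch of that isolation (all names hypothetical): if each task is a pure function of its input, a throw in one task can be caught and recorded without poisoning the others:

    #include <exception>
    #include <functional>
    #include <optional>
    #include <vector>

    // Each task is pure: it touches no shared mutable state, so a failed
    // task leaves nothing to clean up beyond its own stack.
    std::vector<std::optional<int>> run_tasks(
        const std::vector<std::function<int()>>& tasks)
    {
        std::vector<std::optional<int>> results;
        results.reserve(tasks.size());
        for (const auto& task : tasks) {
            try {
                results.push_back(task());       // success: keep the value
            } catch (const std::exception&) {
                results.push_back(std::nullopt); // failure: record and move on
            }
        }
        return results;
    }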
Since the handler cannot know the cause of the null pointer, which would otherwise have been addressed by a conditional, continuing after the handler is pretending that it never happened. Functional-style programming and careful use of RAII are ways to ensure proper cleanup during unwinding, but they require the kind of careful programming that would have inserted a conditional to test for a null pointer.
If a null pointer occurs somewhere in your code and this class's ctor throws an exception, the handler has no idea of the source of the null pointer. Thus, you have an unknown condition as far as the handler is concerned. (I had been discussing a handler in main() or a thread procedure, as opposed to one very near the object's mis-construction.)
If you're running individual tasks, you know which one was associated with the throw. For many cases, that's all you need to know. This does depend on the program, which is why the decision to catch logic errors should be up to the programmer, who knows it better than we do. You don't have to catch the exceptions if you don't want to.
There are certainly exceptions (pun intended) to most any rule, but a segfault and core dump, for example, are preferable to std::terminate() due to an unhandled exception.
OTOH, if a null is actually stored in the object, then it will eventually be dereferenced, in the general case anyway, and the segfault will indicate which instance was bad. From that, you should be able to discover its genesis, at least if the object's address is consistent or the pointee's type is unique. That makes finding the source of the null pointer somewhat more tractable without a debugger.
You'd get better data if the failure was in the constructor. It's more likely that you could associate it with the source of the pointer, which might have been destructed by the time the pointer is dereferenced. (Remember that my main argument is that there should always be a check).
In an optimized build, with no preceding assertion, how does throwing an exception help identify the source, short of a stack trace captured in the exception, as one poster mentioned he had?
And, of course, you don't always get the segfault data, you sometimes don't even get notified that there is a bug.
That is clearly a problem.
You're making the assumption that after a bug is discovered, you'll be quickly notified and nothing bad will happen in the meantime. Unfortunately, that isn't true at all.
We all make assumptions based upon the apps we build and how we interact with our users.
Consider the case of taking the address of a static object.
I hope you're using a do-nothing custom deleter.
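Something like this, say (a sketch; Config and get_config are made-up names):

    #include <memory>

    struct Config { int verbosity = 0; };

    Config g_config;  // static storage duration: never deleted

    // &g_config is non-null by construction, and the do-nothing deleter
    // keeps shared_ptr from deleting an object it doesn't own.
    std::shared_ptr<Config> get_config()
    {
        return std::shared_ptr<Config>(&g_config, [](Config*) { /* no-op */ });
    }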
There's no need to check for null, so there's no need to risk a pipeline stall on a failed branch prediction, or cache invalidation from the exception construction and throwing code you'd otherwise add to the ctor.
Well, there's premature optimization for you.
Library code can't afford to assume such things don't matter.
This is a class that's concerned with safety, not efficiency. If there's a safe option and an unsafe option, then the safe option should be the default, and the unsafe option should be the verbose one.
As I've said numerous times, factory functions could be created that throw and others that don't. The ctor shouldn't. You could add a no_throw_t overload of the ctor, I suppose.
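A rough sketch of that split (wrapper, make_checked, and make_unchecked are hypothetical names standing in for the class under discussion):

    #include <cassert>
    #include <stdexcept>

    template <typename T>
    class wrapper {
    public:
        explicit wrapper(T* p) : p_(p) { assert(p); }  // ctor never throws
    private:
        T* p_;
    };

    // Checked factory: throws, for callers who can't prove non-null.
    template <typename T>
    wrapper<T> make_checked(T* p)
    {
        if (!p) throw std::invalid_argument("null pointer");
        return wrapper<T>(p);
    }

    // Unchecked factory: for callers who know the pointer is non-null.
    template <typename T>
    wrapper<T> make_unchecked(T* p)
    {
        return wrapper<T>(p);
    }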
Considering your concerns, why are you repeatedly creating shared pointers from a static object? This would have to be a very tight loop if a null check on a value that you have to access anyway is going to make a difference. In that case you might as well create one non-null shared pointer and reuse it, as that will be faster.
I offered one example of how one can know that the check is unnecessary. I wasn't implying a loop repeatedly creating a pointer to one static object. I get the feeling you're being purposely difficult now. Perhaps there's an array of objects. Perhaps the objects are in a map. There are numerous sources of non-null pointers.
But if you can't do that, what will this check really cost? The branch prediction should favour the non-null case; if you're concerned about that, you can add a hint in some compilers. I'd actually hope that the compiler would optimise the check away in many cases, especially when initialising from a static object.
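For instance, with GCC/Clang's __builtin_expect (a compiler-specific sketch; checked is a hypothetical helper):

    #include <stdexcept>

    // Hint that the null branch is cold, so the predictor and code layout
    // favour the common non-null path. GCC/Clang only.
    template <typename T>
    T* checked(T* p)
    {
        if (__builtin_expect(p == nullptr, 0))
            throw std::invalid_argument("null pointer");
        return p;
    }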
Optimizations are possible, but they can be thwarted by many factors.
And the exception creation code should be separate from the normal code anyway, so it shouldn't stress the cache too much.
If you say so. The typical design would put that code in the ctor body, not a function.
Then you've got the cost of allocating memory for that shared pointer's count; that's expensive. You might try to avoid that by using enable_shared_from_this, but then you've got the weak pointer check.
Are you saying the overhead of that would swamp the pipeline stall, if it occurs? You may be right, but the stall can still add a significant delay.
You're also no longer thread safe and have to keep a shared_ptr somewhere, so there's little argument against using a static non-null shared pointer, which will avoid both the weak pointer check and the null check. That's surely preferable, since you're so concerned about performance.
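That reuse might look like this (a sketch; in C++11, the local static gives thread-safe one-time initialisation):

    #include <memory>

    struct Config { int verbosity = 0; };  // hypothetical pointee

    // Construct the non-null shared_ptr once and hand out copies. Copying
    // only bumps the reference count: no null check, no weak-pointer
    // check, no repeated control-block allocation.
    std::shared_ptr<Config> shared_config()
    {
        static const std::shared_ptr<Config> instance = std::make_shared<Config>();
        return instance;
    }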
I have no idea where that came from.
But whatever you do, creating this shared pointer is going to require memory manipulation on the heap; that's not free either.
OK
But really, the only way to tell is to try it out in a real program.
Right

___
Rob
(Sent from my portable computation engine)