Peter and I feel this is worth fixing despite the increased runtime overhead. Others, including one of its original inventors, feel it is too much overhead.
I must ask once again: how do we know it is "too much" overhead? Where is the real-world program which would be impossible or difficult to write otherwise?
For some reason I never get an actual answer, and instead it's always something along the lines of "I'm an expert, trust me, I know". Well, I don't. Sorry.
A little unfair. I've supplied numbers to you before, but as with all such numbers, they're totally arbitrary and mostly meaningless. All anyone can really say is that with some real-world application X, Y was the case for that application. And that's about it.

This particular debate comes down to how expensive a virtual function call is, and how frequently code is going to be doing `if(ec) ...`. As is fairly obvious by now, I consider virtual function call overhead to be pretty much in the stochastic noise of processor timings except on in-order CPUs [1]. As in, it can generally be ignored as unimportant. I also think that `if(ec) ...` is not going to be executed frequently: no more than twice per failing operation, and usually once. So to me, my proposed fix solves, for virtually no cost, a real correctness problem in lots of code out in the wild which is currently subtly broken. Therefore it's a slam dunk: it should be implemented and the C++ standard upgraded to match. Obviously others disagree with me.
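To spell out what I mean, here is a rough sketch. It is not the exact proposed wording, just the shape of the change and where the one virtual call per check would land:

```cpp
#include <system_error>

// Today, the contextual conversion to bool on std::error_code is simply
//   explicit operator bool() const noexcept { return value() != 0; }
// The fix being discussed is, roughly, to ask the category whether the value
// actually represents a failure, via a virtual function on error_category
// which categories may override, i.e. something shaped like
//   explicit operator bool() const noexcept { return category().failed(value()); }
// That one virtual call per `if(ec)` is the overhead being argued about.

std::error_code do_something()
{
    // Hypothetical operation, invented for illustration; pretend it failed.
    return std::make_error_code(std::errc::invalid_argument);
}

int run()
{
    if (auto ec = do_something())   // the check is done once, maybe twice,
        return ec.value();          // per *failing* operation
    return 0;
}
```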
This applies to C++ exception handling and error handling in general. Yes, I know, there is overhead -- but where is the proof that 1) it matters, and 2) it's not worth it?
The "it matters" part depends on your attitude to the semantics behind observation of error codes. What does the code being 0 actually mean? Also, as Peter has explained several times now, composition of error_code returning functions in generic code where the category is unknown is hard without a generic facility to determine if some error_code means failure or not. For that alone, this change should be implemented. It breaks no existing code whatsoever, but fixes the semantics into what the programmer probably intended. Niall [1]: Of course raw benchmarks show virtual function calls are slower to no function call due to inlining in some artificial benchmark. But in real world code, the cost of a virtual function relative *to doing any work at all* is insignificant. While your doing work code is stalled on memory or whatever, the CPU is off busy executing virtual function calls and so on. Once you average it out, virtual function calls approach free in most real world code use cases on modern out-of-order CPUs. -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/