On Tue, Jan 16, 2018 at 1:40 AM, Niall Douglas via Boost <boost@lists.boost.org> wrote:
>>> Myself and Peter feel this is worth fixing despite the increased runtime
>>> overhead. Others, including one of its original inventors, feel it is
>>> too much overhead.
>> I must ask once again: how do we know it is "too much" overhead? Where is
>> the real-world program which would be impossible or difficult to write
>> otherwise?
>> For some reason I never get an actual answer; instead it's always something
>> along the lines of "I'm an expert, trust me, I know". Well, I don't. Sorry.
> A little unfair. I've supplied numbers to you before, but as with all such
> numbers, they're totally arbitrary and mostly meaningless.
I believe I was supporting your position in this case: if it fits the
semantics, it should be a virtual function call unless that causes problems.
I was just pointing out that "but it's slow" is not a valid argument unless
there is practical evidence to that effect.
>> This applies to C++ exception handling and to error handling in general.
>> Yes, I know there is overhead -- but where is the proof that 1) it
>> matters, and 2) it's not worth it?
> The "it matters" part depends on your attitude to the semantics behind
> observation of error codes. What does the code being 0 actually mean?
By "it matters" I mean: what programs will be more difficult to write because
the virtual function call is "too slow"? Just because this question is
difficult or impossible to answer doesn't mean it is unfair, because we have
to evaluate the impact of a compromise, and that can't be done in the
abstract.

Emil