On Wed, Jun 14, 2017 at 12:42 PM, Ion Gaztañaga via Boost <boost@lists.boost.org> wrote:
In my opinion, at least after writing mission-critical software, the main problem of exceptions is the non-explicit, potentially infinite set of exit paths they add to every function call (as each thrown type can require a different action when catching), worsened by the lack of static enforcement of what can be thrown.
When an error occurs (assuming you can't just bail out), it must get passed up the call stack, together with relevant data that is captured at the point of detection, as well as in functions further up the call stack, all the way to a function that can actually deal with the problem. If you don't use exceptions, you end up writing a lot of if( !success ) return error, which is error-prone. Moreover, in environments where error handling doesn't use exceptions, the code tends not to use RAII consistently, so in addition to returning errors you must manually release resources, which is also error-prone (say hello to leaks).

Semantically, however, exception handling only 1) writes the ifs for you, and 2) forces you to use RAII and to write exception-safe code (which is a good idea anyway). With or without exceptions, the error-handling paths, as well as the work that must be done in case of errors (freeing resources, etc.), are identical.

The supposed "unpredictability" of exceptions is not semantic but syntactic. Depending on the compiler, and on whether a function got inlined or not (even functions that are exception-neutral, i.e. that neither throw nor catch exceptions), there may be overhead, which can sometimes be significant; nor have I ever seen a compiler that optimizes out throw statements. But even this is probably just theoretical: I've been asking for hard data showing that in a given use case exception handling adds too much overhead. I keep hearing that such cases exist (and they probably do), but I have yet to see a single one.