On 9 Feb 2016 at 12:04, Emil Dotchevski wrote:
It feels strange to have to defend the use of exceptions for reporting errors in C++, on the boost development board of all places. There are many other advantages; for example, when returning error codes there is no such thing as an error-neutral context in your program, which increases coupling. Yes, in some contexts one can't afford to use exceptions, but all general complaints that exception handling causes performance or any other problems are theoretical, at best.
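As a concrete illustration of the error-neutral point (a minimal sketch with hypothetical names, not code from this thread): with exceptions, an intermediate function that cannot handle a failure needs no error-handling code at all, whereas with returned error codes it must check and forward every failure, coupling it to its callee's error protocol.

#include <iostream>
#include <stdexcept>
#include <string>

std::string load_config(const char *path)
{
    if (!path) throw std::runtime_error("no config path given");
    return std::string(path);  // stand-in for real parsing
}

// Error-neutral: neither handles nor mentions errors, failures simply pass through.
std::string app_title(const char *path)
{
    return "App: " + load_config(path);
}

// The same shape with returned error codes: the intermediate layer is forced
// to participate in error propagation.
int load_config_ec(const char *path, std::string &out)
{
    if (!path) return 1;
    out = path;
    return 0;
}

int app_title_ec(const char *path, std::string &out)
{
    std::string cfg;
    if (int ec = load_config_ec(path, cfg)) return ec;  // check and forward
    out = "App: " + cfg;
    return 0;
}

int main()
{
    try { std::cout << app_title("prod.cfg") << "\n"; }
    catch (const std::exception &e) { std::cout << e.what() << "\n"; }
}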
I've noticed that a lot of people who take issue with the overhead of exceptions really mean that they take issue with the *indeterminacy* introduced by exceptions, and even that is often a proxy for "indirect/implicit/hidden/non-obvious use of malloc() or free()", which is the main source of unpredictable exception throws. In other words, people don't mind predictable exceptions anything like as much as they mind potentially unpredictable, unknowable overheads. My current contract has my coworkers highly surprised that fixed worst-case latency code can easily be written using the STL. They had assumed that games and audio development banned use of the STL and exceptions due to unpredictable execution times. They are not wrong; you just need to learn which bits of the STL could call malloc or have worse-than-linear execution times and which bits never will, and only use the latter in hot code paths. That's really a training/familiarity (and maintenance) problem in the end.
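A minimal sketch of that discipline (my own example, not from the original post): confine all allocation to startup, and keep the hot path to STL operations whose worst case is bounded and allocation-free.

#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

struct Sample { int channel; float value; };

int main()
{
    std::vector<Sample> ring;
    ring.reserve(4096);              // the only allocation, done at startup

    std::array<float, 16> mix{};     // fixed-size, never allocates

    // Hot path: push_back while below capacity, operator[], std::max --
    // none of these can touch the allocator or throw std::bad_alloc.
    for (int i = 0; i < 4096; ++i)
    {
        if (ring.size() < ring.capacity())
            ring.push_back({i % 16, 0.5f});
        std::size_t slot = static_cast<std::size_t>(i % 16);
        mix[slot] = std::max(mix[slot], ring.back().value);
    }
    // Deliberately avoided here: unreserved push_back (may reallocate),
    // per-node containers like std::map/std::set (allocate on every insert),
    // and anything whose worst-case execution time is unknown inside a
    // deadline-bound loop.
    return ring.size() == 4096 ? 0 : 1;
}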
With just a little extra libclang tooling (some of which I plan to write), this style of idiom ought to be mathematically provable as correct in the functional programming sense, which would be cool, not least for those programming nuclear reactors etc.
Could you prove anything mathematically in the presence of side effects and pointers?
It's not my field, so everything I'm about to say is hearsay, but back during the nuclear reactor certification of QNX (which is written in C) I noticed that you must always assume that the functions you call behave as specified, and the only goal is to prove that the function currently being proved is no worse than the things it calls. From what I saw, you can't prove a whole program outright, but you can prove it if you assume that everything each function calls is correct and you avoid a long list of things in C which would break the proof. They had LLVM-based tooling which generated the proofs from the AST, or flagged code where you were doing something not permitted, and it appeared to work very well. Obviously C++ is orders of magnitude harder, but with a restrictive enough list of things you can't do I'm sure it's achievable. Whether such a program would still qualify as C++ is an open question.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
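For what it's worth, a rough sketch of that assume-guarantee style (my own illustration, with hypothetical function names; the asserts stand in for machine-checked pre/postconditions): each function is verified against its own contract, taking the contracts of its callees on trust.

#include <cassert>
#include <cstddef>

// Callee contract (taken on trust, not re-proved at the call site):
//   pre:  buf != nullptr and len > 0
//   post: returns an index in [0, len)
std::size_t index_of_min(const int *buf, std::size_t len);

// Function under proof. Its only obligations are to show that
//   1. it establishes index_of_min's precondition before the call, and
//   2. its own postcondition (returning the smallest element) follows
//      from index_of_min's postcondition.
int smallest(const int *buf, std::size_t len)
{
    assert(buf != nullptr && len > 0);       // our own precondition
    std::size_t i = index_of_min(buf, len);  // callee's precondition holds here
    assert(i < len);                         // granted by the callee's postcondition
    return buf[i];                           // therefore in bounds
}

std::size_t index_of_min(const int *buf, std::size_t len)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < len; ++i)
        if (buf[i] < buf[best]) best = i;
    return best;
}

int main()
{
    int v[] = {3, 1, 2};
    return smallest(v, 3) == 1 ? 0 : 1;
}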