On Fri, Feb 12, 2016 at 2:22 AM, Niall Douglas wrote:
> On 11 Feb 2016 at 18:00, Emil Dotchevski wrote:
>> That said, whether or not a function throws exceptions is a design
>> decision, nothing to do with run-time overhead. The reason, again, is
>> that any exception handling overhead of calling a function can be
>> easily eliminated, by inlining.
>
> If the compiler can see all possible throw sites, and it is not called
> MSVC, then yes.
If you inline a function, the compiler can see when it throws and when it doesn't. Every compiler, even MSVC, will remove the exception handling overhead, as long as you inline, though of course you will incur overhead if an exception is actually thrown.
> I don't think people's issue is about what happens in practice under
> optimisation. They want *guarantees*. They want to see noexcept on
> every function because it's a *guarantee* that no matter what the
> programmer does, no unbounded execution times will occur.
I know what people want; I've had my share of arguments with people who want noexcept no matter what.
> This is why games, audio and HFT shops ban the STL and disable RTTI
> and exceptions. It's about assurance, not reality in practice.
I know that too. It's a free country, anyone can disable anything they want.
It is true that this will not make the function work faster in the case when it does throw, but at that point we're usually aborting an operation; it's an exceptional condition. I understand that this may be a problem in some weird cases, but that's when I'd demand strong evidence rather than conspiracy theories and toy benchmark programs.
> A lot of this debate isn't about empirical reality; if C++ programmers
> were governed by reality, everyone would have been using the STL with
> exceptions and RTTI turned on for years now.
Yep. Recently we shipped a game on the Nintendo 3DS, 60 fps stereo, with exception handling and RTTI enabled and the STL used throughout. Peter helped me get boost::shared_ptr to work on that platform; evidently we were the first developer to use Boost on it. :)
> As I mentioned, it's about *assurance*: given a pool of average
> programmers, how can you (technical leads and management) be assured
> they will write, on average, low worst-case-latency C++? This is why I
> expect exceptions-disabled C++ to remain around for a long time to
> come, and why AFIO v2 and Outcome will eventually support building
> with exceptions disabled.
> Such arguments do push my buttons so I get involved, but I have very
> low tolerance for how much I'd compromise my library designs to
> accommodate incompetent technical leads.

If you think that you've made your library harder to use without empirical improvements in performance, isn't that a problem?

Emil