No, you're giving too much credit to compilers. Compilers cannot analyze code at the level humans do. For example, if a valid index is stored in a data member by one method of a class and used by another, and the first method is required to be called first (an ordering that might itself be enforced by a runtime check on yet another member variable), the compiler has no way to know that the index is always valid. At least, I haven't seen such cleverness.
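
A minimal sketch of that pattern, with hypothetical names (`Parser`, `prepare()`, `separator()`): the invariant that makes the index valid spans two methods and a flag, which is exactly the kind of whole-program reasoning optimizers don't do:

```cpp
#include <stdexcept>
#include <string>
#include <utility>

class Parser {
    std::string buf_;
    std::size_t idx_ = 0;
    bool prepared_ = false;
public:
    explicit Parser(std::string buf) : buf_(std::move(buf)) {}

    // Must be called first; establishes the invariant idx_ < buf_.size().
    void prepare() {
        idx_ = buf_.find(':');
        if (idx_ == std::string::npos)
            throw std::runtime_error("no ':' in input");
        prepared_ = true;
    }

    char separator() const {
        // Runtime check enforcing the call order, as described above.
        if (!prepared_)
            throw std::logic_error("prepare() not called");
        return buf_[idx_]; // valid by the invariant, but the compiler
                           // generally cannot prove that here
    }
};
```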
I never bought "most of the time you already know the index is within bounds", because it is equivalent to "most of the time your code is correct, so let's skip checks/unit tests/hardening/...". If that were true, buffer overflows would be non-existent. Are they?
> Mistakes happen, I'm not arguing with that. But defensive programming is not the solution because it doesn't save you from these mistakes. Code that passes an invalid index to at() is just as incorrect as code that does so with operator[].

No. Defensive programming is a safety net. If you always use at(), and [] only in specific scenarios, then at least your application will crash with a possibly good stacktrace, or you can catch the exception in an appropriate place.
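
To make the safety-net point concrete, here is a minimal sketch (a plain std::vector and an assumed off-by-one mistake) contrasting the two access paths:

```cpp
#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    std::size_t i = 3; // the human mistake: one past the end

    // int x = v[i];   // operator[]: undefined behavior; may appear to
                       // work, corrupt memory, or crash somewhere else

    try {
        std::cout << v.at(i) << '\n';
    } catch (const std::out_of_range& e) {
        // at(): the mistake surfaces right here, where it was detected
        std::cerr << "bad index: " << e.what() << '\n';
    }
}
```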
> That's why there is usually a checked and an unchecked method. Your usual accesses though are: `for(auto c: string) ...`, `for(int i = 0; i < string.size(); ++i) ...`

> No, the check gets optimized away if you do everything right.

As I said above, you can't count on that. And so what? Marginally lower performance, if it is even measurable, in exchange for safety from buffer overflows.
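
For illustration, a sketch of the two loop shapes under discussion (`checked_sum` and `unchecked_sum` are hypothetical helpers): in the indexed loop the range check inside at() is provably redundant, and an optimizer may well remove it, but nothing guarantees that it will:

```cpp
#include <string>

// Indexed loop with checked access: the check is redundant because
// i < s.size() always holds, but elision is up to the optimizer.
long checked_sum(const std::string& s) {
    long sum = 0;
    for (std::size_t i = 0; i < s.size(); ++i)
        sum += s.at(i);
    return sum;
}

// Range-for: no index at all, so there is nothing to check.
long unchecked_sum(const std::string& s) {
    long sum = 0;
    for (char c : s)
        sum += c;
    return sum;
}
```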
> You can't recover from an unexpected exception. And it is unexpected, seeing as we're talking about a human mistake.
You can: for e.g. a connection/parser/..., you terminate the connection and log the issue. At the very least you can catch it in main or in some exception hook to log a backtrace and exit gracefully.
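
A sketch of both recovery points, assuming a hypothetical `Connection`/`handle_connection` pair (the out-of-range at() call stands in for the human mistake):

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>
#include <string>

struct Connection { int fd = -1; }; // hypothetical

// Hypothetical handler; the bad index simulates the mistake.
void handle_connection(Connection&) {
    std::string request = "GET";
    char c = request.at(16); // throws std::out_of_range
    (void)c;
}

void serve(Connection& conn) {
    try {
        handle_connection(conn);
    } catch (const std::out_of_range& e) {
        // one bad request: terminate the connection, log, keep serving
        std::cerr << "dropping connection " << conn.fd
                  << ": " << e.what() << '\n';
    }
}

int main() {
    try {
        Connection c{42};
        serve(c);
    } catch (const std::exception& e) {
        // last-resort hook: log a diagnostic and exit gracefully
        std::cerr << "fatal: " << e.what() << '\n';
        return 1;
    }
    return 0;
}
```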
> The correct way of tackling mistakes is human reviewing, testing and debugging. For the debugging part, a crash with a saved backtrace is more useful than an exception with a meaningless error message like "fixed_string::at", as many standard libraries like to produce.

You don't get a crash, at least not one with a "meaningful backtrace", if you don't use exceptions. You get UB, a security issue, and possibly a crash in some other part of the program later, instead of at the point where the error was detected. And why would "human reviewing, testing and debugging" be more reliable than "human code-writing"? It is still a human, and humans make mistakes. Better to use a safety net.