On 24/01/2018 14:53, Emil Dotchevski wrote:
But we're not even really talking about those things, we're talking about parameter validation and state preconditions.
I was talking about logic errors. Bugs.
Yes. Which (other than business logic errors) are usually the result of the above.
That depends on the API. It is the responsibility of the caller to not violate the documented preconditions when calling a function, and if he does, that is a logic error. The whole point of defining preconditions is to say that all bets are off otherwise.
Yes. *But* it is useful to actually verify that preconditions are not violated by accident. Usually this is done only in debug mode by putting in an assert, and that is sufficient when unit tests exercise all paths in debug mode. The checks are usually omitted from release builds for performance reasons. But some applications might prefer to retain the checks and throw an exception instead, in order to sacrifice performance for correctness even in the face of unexpected input.

The more confident you are (hopefully backed up by static analysis and unit tests with coverage analysis) that the code doesn't contain such logic errors, the more inclined you might be to lean towards the performance end rather than the checking end. But this argument can't apply to the public APIs of a library, since by definition you cannot know all callers and so cannot prove they all get it "right".

To bring this back to Outcome: some people would like .error() to be assert/UB if called on an object that has no error, for performance reasons (since they will "guarantee" that they never call it in any other case). Other people would prefer that it throws (or does something assert-like that still works in release builds), so that it always has non-UB failure characteristics in the case where some programmer fails to meet that "guarantee". Outcome supports this by using a configurable policy. Some other libraries support it by using BOOST_ASSERT. Many don't support it at all, which is unfortunate, and which (I believe) leads to the vast majority of bugs that aren't caught during development.
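For concreteness, here is a minimal sketch of the two behaviours being argued about. This is not Outcome's actual policy interface; the names (result_like, assert_policy, throwing_policy, on_missing_error) are made up purely for illustration:

    #include <cassert>
    #include <optional>
    #include <stdexcept>

    struct assert_policy {
        // Debug builds assert; release builds compile the check away,
        // so a violated precondition falls through to UB below.
        static void on_missing_error() {
            assert(!"error() called on an object holding no error");
        }
    };

    struct throwing_policy {
        // Always checked, even in release builds: the logic error
        // becomes a catchable exception instead of UB.
        static void on_missing_error() {
            throw std::logic_error("error() called on an object holding no error");
        }
    };

    template <class Policy>
    class result_like {                  // hypothetical, for illustration only
        std::optional<int> error_;
    public:
        explicit result_like(std::optional<int> e = std::nullopt) : error_(e) {}

        int error() const {
            if (!error_)
                Policy::on_missing_error();  // enforce the precondition per policy
            return *error_;                  // UB here only if the policy returned
        }
    };

With assert_policy a release build pays nothing for the check, but a caller that breaks the precondition gets UB; with throwing_policy every build pays the branch, but the failure mode is always well-defined.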
Debug iterators exist to help find logic errors, not to define behavior in case of logic errors.
Those are the same thing.
(I'm using "undefined" in the English "it could be in one of many intermediate states" sense, not the Standard "dogs and cats living together" sense. Mutexes might be broken, the data might be silly, and the class invariant might have been violated, but it is probably still destructible.)
And I'm using its domain-specific meaning: moved-from objects don't have undefined state, they have well-defined but unspecified state.
If you prefer, then, what I was saying is that in the absence of outright memory corruption (double free, writes out of bounds, etc), then all objects should at all times be in *unspecified* but destructible states -- even after logic errors. They may contain incorrect results, or unexpected nulls, or otherwise not be in intended or expected states, but that shouldn't prevent destruction.
More to the point, this situation is NOT a logic error, because the object can be safely destroyed. Logic error would be if the object ends up in undefined state, which may or may not be destructible. You most definitely don't want to throw a C++ exception in this case.
If the invalid-parameter and out-of-bounds classes of logic errors are rigorously checked at all points before the bad behaviour even happens, then the object won't ever end up in an undefined state to begin with -- merely an unexpected state from the caller's perspective. Obviously (it's turtles all the way down) if the checks themselves have incorrect logic then this doesn't really help; but that's what the unit tests are for.
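As a trivial sketch of that idea (again with made-up names): the bounds check fires before anything is modified, so even a call that violates the documented precondition leaves the object in a perfectly valid, destructible state -- just not the state the caller expected.

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    class small_buffer {   // hypothetical, for illustration only
        std::vector<int> data_;
    public:
        explicit small_buffer(std::size_t n) : data_(n) {}

        void set(std::size_t i, int v) {
            // Check the parameter before touching any state: a bad index
            // throws here and the buffer is left exactly as it was.
            if (i >= data_.size())
                throw std::out_of_range("small_buffer::set: index out of range");
            data_[i] = v;   // reached only when the precondition holds
        }
    };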