Chris, am I correct that this is your first post to boost-dev in nearly a decade? If so, welcome back and it's great to see you here again.
I have only skimmed this thread, but you appear to be operating under a misconception.
The error_code class itself deliberately does *not* imbue zero values with the meaning 'success' and non-zero values with the meaning 'failure'. An error_code simply represents an integer and a category, where the category identifies the source of a particular integer value. The specification of the error_code class carefully avoids making any judgement as to whether a particular value represents success or failure. The construct:
if (ec) ...
does not, in and of itself, mean 'if an error ...'. Instead, operator bool is specified to behave as the ints do, and the above construct should simply be read as 'if non-zero ...'.
Correct. And I don't think anyone on boost-dev who witnessed the Outcome v1 review is in doubt about the literal meaning of operator bool in error_code. But that's not really what we're discussing. Rather, I think there is widespread agreement here that most code out there working with error_code has been written by programmers who, when they write: if (ec) ... ... do think that they are writing "if there is an error, then ...". That is quite distinct from "if ec is not empty, then ..." and very distinct from "if ec's value is not zero, then ...". What I am discussing here is whether to do something about this common mispractice, i.e. whether, and how much, to change or even break the system_error API so that the incorrectly written code becomes correct.
Instead, the correspondence of particular error_code values to success or failure is context specific and is defined by an API. The filesystem and networking libraries do define zero to mean success, partly because they are specified in terms of POSIX operations that do likewise.
Sure. But you can surely see how sloppy usage has entered widespread practice here? If all current uses of error_code up until now just happen to work via "if(ec) ...", then nobody has any reason to assume otherwise. We are also mindful that Outcome/Expected is going to make this problem much worse: if function A calls functions X, Y and Z, each of which returns an error code with a custom category, then function A may return an error code of up to four different categories, each of which may have its own criteria for what success or failure means. This thus turns into a problem of composition. If you have many libraries and many APIs all using custom error code categories, the end user is currently likely to be surprised at times through no fault of their own, because some programmer in some layer in some library wrote "if(ec) ..." thinking "if there is an error, then ...". This is what I am trying to avoid by arguing for a retrofit of error_code to make it match what programmers think it means, but currently doesn't.
When defining your own API you are free to define your own notion of success or failure. One way would be to define your own error_condition for this (an intended use case for error_condition), but you may also use some other mechanism entirely (indicate failure via exception, return value, etc.). You might like to consider this approach for your own API that wraps NT kernel calls.
Unnecessary: every API returns an outcome::result&lt;T&gt;.
I suspect you may be coming unstuck because (unless I am mistaken) the expected and outcome classes have baked in the assumption that zero means success and non-zero means failure. This isn't the case for error_code itself.
Actually, one of the main motivations behind Outcome was to rid the community of ever having to work directly with error_code again. It's highly error prone and very brittle. With Outcome/Expected, the ease with which error_code lets mistakes slip through unenforced is mostly mitigated. BTW, neither Outcome nor Expected makes any assumptions about error_code, for the simple reason that you can use any Error type you like, e.g. std::string. As you'll see in the Outcome v2 review next week, it is in fact common in Outcome v2 not to use error_code for E, but rather a custom type with custom payload lazily convertible to an error_code. Both Expected and Outcome implement strict success/failure semantics, not the mishap-prone woolliness of error_code. It's been a big win over current practice. Outcome also eliminates the dual-API problem in Filesystem. We'll see if the community likes it (it was the v1 review which suggested the approach).
My intended proposal for WG21 is that when you compile code with C++ 23 or later, the new string_view signature takes effect. If you compile with C++ 20 or earlier, the string signature remains.
The use of string_view is a non-starter. It has unclear ownership semantics
I don't get the mental block people are having on this. Ownership obviously lies with the error category from which the message was sourced, so the storage needs to live as long as any instance of the category exists anywhere in the process. For most categories it's a static const string table anyway, living in the read-only part of the compiled binary.
and does not cater to error sources that compose their messages on demand (e.g. messages that are obtained from another API function, read from a file-based catalog, or constructed based on an error code value that happens to be a bitmask). Those can be handled with a static hashmap cache: compose the message once on demand, store it for the life of the process, and return a string_view into the cached storage.
Yes it would be nice to find a solution for error messages in freestanding environments that lack std::string, but string_view isn't it.
I'm not so bothered about the freestanding environment issue personally.
I am bothered about the approx 2 seconds per compiland that including
<string> adds if no other STL is being included. In a million file
codebase, that's an extra 23 days of compute time per build. It's
unacceptable.
Outcome was very specifically designed to be used in the public
interface header files of million file codebases, so for me eliminating
the include of <string> is highly important.
For the v2 Outcome review end of this week, I will be using