On Wed, Jan 17, 2018 at 1:16 AM, Andrzej Krzemienski via Boost
2018-01-16 19:58 GMT+01:00 Andrey Semashev via Boost
: There's nothing ambiguous about the conversion operator, as it is specified in the standard, and I find the syntax quite intuitive.
You say you find the syntax quite intuitive. But do you also find the semantics intuitive? When you type or read `if (ec)`, do you interpret it as "if the operation failed" or as "if the operation returned a code with numeric value other than zero, regardless of the domain and regardless of whether in this domain zero means success or failure"?
The two interpretations are equivalent for me, because in my code an error code of zero always means "success". Why? Because I pick the error values that way (or map them that way, if the values come from an external API), so that they play nicely with the `error_code` design and the rest of the code.

If we supported multiple "success" values or non-zero success values, I would still expect "if (err)" to mean "if some error happened". Why? Because that "if" controls error handling, not arbitrary non-zero result handling. I've never seen or written code that does otherwise.

If I want to test for a particular error code, I write "if (err == x)" or "if (err != x)" - that's the syntax that communicates the intent to check for a particular value, including when `x` happens to be zero (which doesn't really matter, because `x` is always an enum value and never a magic number).

You may ask what the difference is between "if (err)" and "if (err != success)": the latter only tests for failure in the particular domain. This difference may not be obvious, and that is why I generally avoid writing "if (err != success)".

Maybe I'm missing some crucial experience, but so far the convention described above has worked well for me.