https://github.com/psiha/err README.md copy-paste:
---------------------------------------------
err - yet another take on C++ error handling.
We have throw and std::error_code, what more could one possibly want?
---------------------------------------------
std::error_code&co tried to solve or at least alleviate the problem of
overhead of using exceptions (in terms of both source code verbosity and
binary code size&speed) with functions which need to report failures
that need not be 'exceptional' depending on the context of the call
(thereby requiring try-catch blocks that convert exceptions to 'result
codes'). Unfortunately, the way it was finally implemented,
std::error_code did not quite accomplish what it set out to do:
• in terms of runtime efficiency: 'manual'/explicit EH (try-catch
blocks) is no longer needed but the compiler still has to insert hidden
EH as it has to treat the functions in question as still possibly
throwing (in fact the implementation/codegen of functions which use
std::error_code for error reporting becomes even fatter because it has
to contain both return-on-error paths and handle exceptions)
• in terms of source code verbosity: again, try-catch blocks are gone
but separate error_code objects have to be declared and passed to the
desired function.
Enter boost::err: 'result objects' (as opposed to just result or error
codes) which can be examined and branched upon (like with traditional
result codes) or left unexamined to self-throw if they contain a failure
result (i.e. unlike traditional result codes, they cannot be
accidentally ignored). Thanks to the latest C++ language features (such as
rvalue member function overloads), incorrect usage (that could otherwise,
for example, lead to a result object destructor throwing during a
stack unwind) can be disallowed/detected at compile time.
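To make the idea concrete, here is a minimal, hypothetical sketch of such a 'result object' (not the library's actual implementation - storage, conversions and debug checks differ): accessors are rvalue-qualified, and an unexamined failure self-throws from the destructor.

    #include <string>
    #include <utility>

    struct parse_error { std::string message; };

    // Hypothetical sketch only - err::fallible_result / err::result_or_error
    // differ in detail.
    template <typename T, typename Error>
    class basic_fallible_result
    {
    public:
        basic_fallible_result( T     value ) : succeeded_( true  ), value_( std::move( value ) ) {}
        basic_fallible_result( Error error ) : succeeded_( false ), error_( std::move( error ) ) {}

        // &&-qualified: callable only on rvalues/temporaries, so a result
        // stashed into a named variable cannot be consumed through this path.
        operator T () &&                    // 'EH path': yield the value or throw
        {
            inspected_ = true;
            if ( !succeeded_ ) throw error_;
            return std::move( value_ );
        }

        ~basic_fallible_result() noexcept( false )
        {
            // An ignored failure cannot be silently dropped - it self-throws.
            if ( !inspected_ && !succeeded_ ) throw error_;
        }

    private:
        bool  succeeded_;
        bool  inspected_ = false;
        T     value_ {};   // a real implementation would overlap these two
        Error error_ {};   // members in a union/variant instead
    };

    basic_fallible_result<int, parse_error> parse_digit( char const c )
    {
        if ( c >= '0' && c <= '9' ) return c - '0';
        return parse_error{ "not a digit" };
    }

    int main()
    {
        int const seven = parse_digit( '7' );   // converts, or throws parse_error
        try { parse_digit( 'x' ); }             // unexamined failure self-throws
        catch ( parse_error const & ) {}
        return seven - 7;
    }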
Discussions where the concept (or a similar one) was first conceived of
and/or so far discussed:
• http://boost.2283326.n4.nabble.com/system-filesystem-v3-Question-about-error...
• http://cpptips.com/fallible
• https://github.com/ptal/std-expected-proposal
• https://github.com/ptal/expected
• https://github.com/adityaramesh/ccbase/blob/master/include/ccbase/error/expe...
• http://stackoverflow.com/questions/14923346/how-would-you-use-alexandrescus-...
• https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Strongl...
• http://2013.cppnow.org/session/non-allocating-stdfuturepromise
****************************************************************
For starters, to reduce the length of the first post, I'll assume that
everyone is at least somewhat familiar with the similar/related proposals
from Andrei Alexandrescu (std::expected) and Niall Douglas (std::monad)
to which I gave links above.
What, primarily, makes (Boost.)Err different from other proposals is the
ability to (using latest C++ features) detect/distinguish between
temporaries and 'saved' return values by using two different class
templates (result wrappers), fallible_result (for rvalues) and
result_or_error (for lvalues) and thus minimise or often completely
eliminate any extra verbosity:
If we had a function foo() that produces bar_t objects but can fail with
an err_t, up till now we had two options:
* bar_t foo() throw( err_t );
* optional<bar_t> foo( err_t & );
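(For comparison, a hedged sketch of what those two classic call sites look like; bar_t, err_t and foo are the illustrative names from above and the bodies are stubs:)

    #include <boost/optional.hpp>

    struct err_t {};
    struct bar_t {};

    // Option 1: 'EH style' - failure is reported by throwing err_t.
    bar_t foo_throwing() { return bar_t{}; }

    // Option 2: 'error code style' - failure is reported through an out parameter.
    boost::optional<bar_t> foo_nothrow( err_t & ) { return bar_t{}; }

    void caller()
    {
        // EH style call site: terse, but forces try/catch on callers that
        // want to treat failure as a non-exceptional result.
        try { bar_t bar( foo_throwing() ); /* use bar */ }
        catch ( err_t const & ) { /* handle */ }

        // Error code style call site: no EH, but a separate error object has
        // to be declared and passed even when the caller would rather throw.
        err_t error;
        if ( boost::optional<bar_t> maybe_bar = foo_nothrow( error ) )
        {
            /* use *maybe_bar */
        }
        else
        {
            /* inspect error */
        }
    }

Both shapes force the choice of error-handling style onto the API author rather than the caller, which is the duality the single fallible_result-returning API described below tries to remove.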
On 17/11/2015 13:45, Domagoj Saric wrote:
Err enables the library writer to write a single API and a single
implementation: err::fallible_result foo(); where fallible_result is the
class template that wraps temporaries/rvalues returned from functions and
its member functions are all declared with && (i.e. callable only on
rvalues) so you get a compiler error if you save it to an auto value and
try to do anything with it. The two exceptions are the implicit conversion
operators to:
- bar_t, which will either return bar_t or throw err_t and which is used
  for the 'EH code path': bar_t my_bar( foo() );
- err::result_or_error, which is used for the 'oldsk00l nothrow error code
  path': err::result_or_error maybe_bar( foo() );
  if ( maybe_bar ) { print( *maybe_bar ); } else { log( maybe_bar.error() ); }
Nitpicking on this, now that we're in an auto world it seems like a step backwards to introduce an API that relies on the declared type of a variable to alter behaviour. There should be methods on fallible_result that return these values explicitly instead.
Also, if the fallible_result rvalue is left uninspected and contains an error its destructor will throw (which AFAICT should be safe considering it is an rvalue and no other exception can possibly be active at the same time)
Throwing from a destructor can cause abrupt termination (or undefined behaviour in some compilers); there are no conditions in which it should be considered "safe". Additionally I'm not convinced that no exception can be active at the same time. Consider function call parameters -- you could easily have one of these get constructed but not consumed yet, and then another parameter throws an exception, resulting in destruction of your object during the throw of an exception.
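The hazard described above can be reproduced without any library at all; a rough sketch with hypothetical types: if a temporary whose destructor throws is already alive when another argument's construction throws, that destructor runs during stack unwinding and its own throw calls std::terminate.

    #include <cstdio>
    #include <stdexcept>

    struct throws_on_destruction
    {
        ~throws_on_destruction() noexcept( false ) { throw std::runtime_error( "from destructor" ); }
    };

    struct throws_on_construction
    {
        throws_on_construction() { throw std::runtime_error( "from constructor" ); }
    };

    void sink( throws_on_destruction const &, throws_on_construction const & ) {}

    int main()
    {
        try
        {
            // If the first temporary is constructed before the second one throws,
            // its destructor runs while an exception is already in flight and its
            // own throw terminates the program (argument evaluation order is
            // unspecified, so some compilers construct them the other way around
            // and the program merely prints "from constructor").
            sink( throws_on_destruction(), throws_on_construction() );
        }
        catch ( std::exception const & e ) { std::puts( e.what() ); }
    }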
On 17.11.2015. 4:28, Gavin Lambert wrote:
On 17/11/2015 13:45, Domagoj Saric wrote:
Err enables the library writer to write a single API and a single
implementation: err::fallible_result foo(); where fallible_result is the
class template that wraps temporaries/rvalues returned from functions and
its member functions are all declared with && (i.e. callable only on
rvalues) so you get a compiler error if you save it to an auto value and
try to do anything with it. The two exceptions are the implicit conversion
operators to:
- bar_t, which will either return bar_t or throw err_t and which is used
  for the 'EH code path': bar_t my_bar( foo() );
- err::result_or_error, which is used for the 'oldsk00l nothrow error code
  path': err::result_or_error maybe_bar( foo() );
  if ( maybe_bar ) { print( *maybe_bar ); } else { log( maybe_bar.error() ); }

Nitpicking on this, now that we're in an auto world it seems like a step backwards to introduce an API that relies on the declared type of a variable to alter behaviour. There should be methods on fallible_result that return these values explicitly instead.
Of course those methods exist. There was a lot more thought invested in the library, both WRT the API and codegen, than 'meets the first post' - which was more about the idea/principle itself - so I avoided 'spamming' it with details...i.e. even if such a method did not exist there is nothing stopping us from adding one ;)
Also, if the fallible_result rvalue is left uninspected and contains an error its destructor will throw (which AFAICT should be safe considering it is an rvalue and no other exception can possibly be active at the same time)
Throwing from a destructor can cause abrupt termination (or undefined behaviour in some compilers); there are no conditions in which it should be considered "safe".
That's not mandated by the standard...that would be a broken compiler/standard library...
Additionally I'm not convinced that no exception can be active at the same time. Consider function call parameters -- you could easily have one of these get constructed but not consumed yet, and then another parameter throws an exception, resulting in destruction of your object during the throw of an exception.
That can't happen: fallible_results are not meant for passing
parameters, and thanks to the rvalue semantics this is enforced by the
compiler rather than just by convention - even if you go to some lengths
to declare a function as taking a fallible_result (i.e. completely
contrary to what the type is for)
foo( fallible_result bar ); (instead of foo( bar ); ) you'll soon discover
that you are doing something wrong because the compiler won't let you do
anything with that bar... (and if the standard would let me declare a
special && destructor for rvalues there'd be no end to happiness, i.e.
you'd get an error immediately, at the declaration, and even better
codegen in some cases...)
On 17/11/2015 21:06, Domagoj Saric wrote:
Also, if the fallible_result rvalue is left uninspected and contains an error its destructor will throw (which AFAICT should be safe considering it is an rvalue and no other exception can possibly be active at the same time)
Throwing from a destructor can cause abrupt termination (or undefined behaviour in some compilers); there are no conditions in which it should be considered "safe".
That's not mandated by the standard...that would be a broken compiler/standard library...
It's mandated by the standard if another exception is already in flight.
Additionally I'm not convinced that no exception can be active at the same time. Consider function call parameters -- you could easily have one of these get constructed but not consumed yet, and then another parameter throws an exception, resulting in destruction of your object during the throw of an exception.
That can't happen: fallible_results are not meant for passing parameters, and thanks to the rvalue semantics this is enforced by the compiler rather than just by convention - even if you go to some lengths to declare a function as taking a fallible_result (i.e. completely contrary to what the type is for) foo( fallible_result bar ); (instead of foo( bar ); ) you'll soon discover that you are doing something wrong because the compiler won't let you do anything with that bar... (and if the standard would let me declare a special && destructor for rvalues there'd be no end to happiness, i.e. you'd get an error immediately, at the declaration, and even better codegen in some cases...)
Declaring a parameter as fallible_result<T>&& will satisfy the compiler, and this doesn't seem unreasonable if it's being passed to a helper method intended to process such results.

Imagine a function that calculates several results, some of which may fail, and then calls a combiner method to choose the "best" non-failing result of the set. Wouldn't the combiner naturally expect a fallible_result<T>&& as its inputs, since it wants to consume them?

I suppose you could force it to accept result_or_error<T> instead, but since this isn't the return type of the function it inhibits using generic type-deducing template code with simplified expressions like: return combiner(calcA(), calcB(), calcC());

The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.

Sure, you can tell people to only use this as a standalone expression and not as a function parameter, but this seems very fragile. People like to inline things, especially rvalues.
Le 18/11/2015 06:49, Gavin Lambert a écrit :
On 17/11/2015 21:06, Domagoj Saric wrote:
Also, if the fallible_result rvalue is left uninspected and contains an error its destructor will throw (which AFAICT should be safe considering it is an rvalue and no other exception can possibly be active at the same time)
Throwing from a destructor can cause abrupt termination (or undefined behaviour in some compilers); there are no conditions in which it should be considered "safe".
That's not mandated by the standard...that would be a broken compiler/standard library...
It's mandated by the standard if another exception is already in flight.
Additionally I'm not convinced that no exception can be active at the same time. Consider function call parameters -- you could easily have one of these get constructed but not consumed yet, and then another parameter throws an exception, resulting in destruction of your object during the throw of an exception.
That can't happen: fallible_results are not meant for passing parameters, and thanks to the rvalue semantics this is enforced by the compiler rather than just by convention - even if you go to some lengths to declare a function as taking a fallible_result (i.e. completely contrary to what the type is for) foo( fallible_result bar ); (instead of foo( bar ); ) you'll soon discover that you are doing something wrong because the compiler won't let you do anything with that bar... (and if the standard would let me declare a special && destructor for rvalues there'd be no end to happiness, i.e. you'd get an error immediately, at the declaration, and even better codegen in some cases...)

Declaring a parameter as fallible_result<T>&& will satisfy the compiler, and this doesn't seem unreasonable if it's being passed to a helper method intended to process such results.
Imagine a function that calculates several results, some of which may fail, and then calls a combiner method to choose the "best" non-failing result of the set. Wouldn't the combiner naturally expect a fallible_result<T>&& as its inputs, since it wants to consume them?
I suppose you could force it to accept result_or_error<T> instead, but since this isn't the return type of the function it inhibits using generic type-deducing template code with simplified expressions like: return combiner(calcA(), calcB(), calcC());
The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.
Sure, you can tell people to only use this as a standalone expression and not as a function parameter, but this seems very fragile. People like to inline things, especially rvalues.
Your example can be adapted to several variables of type fallible_result<T>:

    auto a = calcA();
    auto b = calcB();
    auto c = calcC();

    if (a.value()) // can throw :(

I like the intent of fallible_result<T>, but as you described it could be more dangerous than safe. Thanks for reporting this case. It helps me a lot.

Vicente
On 19/11/2015 12:43, Vicente J. Botet Escriba wrote:
Imagine a function that calculates several results, some of which may fail, and then calls a combiner method to choose the "best" non-failing result of the set. Wouldn't the combiner naturally expect a fallible_result<T>&& as its inputs, since it wants to consume them?
I suppose you could force it to accept result_or_error<T> instead, but since this isn't the return type of the function it inhibits using generic type-deducing template code with simplified expressions like: return combiner(calcA(), calcB(), calcC());
The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.
Sure, you can tell people to only use this as a standalone expression and not as a function parameter, but this seems very fragile. People like to inline things, especially rvalues.
Your example can be adapted to several variables of type fallible_result<T>
auto a = calcA(); auto b = calcB(); auto c = calcC();
if (a.value()) // can throw :(
I like the intent of fallible_result<T>, but as you described it could be more dangerous than safe.
That's sort of what I meant in my original post about requiring an explicitly different type for variable declarations instead of using auto.

Although correct me if I'm wrong, but given:

    fallible_result<T>&& calcA();
    auto a = calcA();

Isn't the type of "a" fallible_result<T> rather than fallible_result<T>&&? Theoretically the author made it unusable in this case since the methods only work on rvalue references (although BTW the syntax to do this is illegal in VS2013, which is what I have handy ATM, so I couldn't verify that this actually generates an error).

It also asserts if more than one fallible_result exists on the thread, which would catch the case that I mentioned too. Though asserts only help you in debug builds, of course. (This probably also means that it's a bad idea to have a coroutine suspension point between calling such a function and resolving the fallible_result...)
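A small sketch of the deduction point in question (fallible_like is a hypothetical stand-in): auto deduces a plain value type, and rvalue-qualified members then become callable again only through std::move.

    #include <type_traits>
    #include <utility>

    template <typename T>
    struct fallible_like
    {
        T value;
        T get() && { return std::move( value ); }   // rvalue-qualified accessor
    };

    fallible_like<int> calcA() { return { 42 }; }

    int main()
    {
        auto a = calcA();                            // 'a' is fallible_like<int>, a plain value
        static_assert( std::is_same<decltype( a ), fallible_like<int>>::value,
                       "auto drops the reference/value category" );

        // int x = a.get();                          // error: get() && cannot be called on an lvalue
        int const y = std::move( a ).get();          // OK: std::move restores rvalue-ness
        int const z = calcA().get();                 // OK: the temporary is already an rvalue
        return y - z;
    }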
On Thu, 19 Nov 2015 07:58:53 +0530, Gavin Lambert
It also asserts if more than one fallible_result exists on the thread, which would catch the case that I mentioned too. Though asserts only help you in debug builds, of course. (This probably also means that it's a bad idea to have a coroutine suspension point between calling such a function and resolving the fallible_result...)
I'm honestly pretty ignorant WRT coroutines and so they haven't crossed my mind...I'll have to examine any possible problems there after I get back to my 'normal working conditions' in about 3 weeks... ...though up front, I suspect that a std::exception_ptr-like solution (if a problem really does exist there) could probably be devised (i.e. something similar as for transferring exceptions across threads)...
On Thu, 19 Nov 2015 05:13:47 +0530, Vicente J. Botet Escriba
Your example can be adapted to several variables of type fallible_result<T>
auto a = calcA(); auto b = calcB(); auto c = calcC();
if (a.value()) // can throw :(
This example was used in one of the previous discussions on other similar proposals I listed in my opening post...It does not apply here as it will cause a compiler error... The only similar 'loophole' would be if you created a, b and c as above but did not touch them and never tested with asserts enabled (as Gavin noticed) but, besides being 'obviously wrong', it is also rather convoluted... Finally, as I mentioned earlier, even this case would be deterministically caught at compile time if && destructors were added to the language (and I see no reason why they shouldn't be).

...which brings us a bit more into 'detail' land :) where I'd like to 'state' one more, related, thing I miss from the language: 'template parametrizable' attributes (like noexcept) that would tell the compiler for example:
- that after a function is called the object is in a destroyed-like state (i.e. the destructor does not need to be called) - the obvious use is to prevent useless destructor calls after the object has been moved
- that an object is pod-like/safe to pass in registers even though it has a copy constructor
These would greatly help with implementation complexity and codegen quality in essentially-wrapper libraries like err or optional...
On Wed, 18 Nov 2015 11:19:30 +0530, Gavin Lambert
Throwing from a destructor can cause abrupt termination (or undefined behaviour in some compilers); there are no conditions in which it should be considered "safe".
That's not mandated by the standard...that would be a broken compiler/standard library...
It's mandated by the standard if another exception is already in flight.
I thought you meant in the general case as you separately mentioned the case of 'active exceptions' in the following paragraph...so I'll address this there...
Additionally I'm not convinced that no exception can be active at the same time. Consider function call parameters -- you could easily have one of these get constructed but not consumed yet, and then another parameter throws an exception, resulting in destruction of your object during the throw of an exception.
That can't happen: fallible_results are not meant for passing parameters, and thanks to the rvalue semantics this is enforced by the compiler rather than just by convention - even if you go to some lengths to declare a function as taking a fallible_result (i.e. completely contrary to what the type is for) foo( fallible_result bar ); (instead of foo( bar ); ) you'll soon discover that you are doing something wrong because the compiler won't let you do anything with that bar... (and if the standard would let me declare a special && destructor for rvalues there'd be no end to happiness, i.e. you'd get an error immediately, at the declaration, and even better codegen in some cases...)

Declaring a parameter as fallible_result<T>&& will satisfy the compiler,
Actually, due to that new dark corner of the language (still somewhat dark to me too) called 'reference collapsing rules', that won't make much of a difference: even if you capture/bind an rvalue to a && it is no longer an rvalue (my 'non formal' understanding) and && member function overloads can only be called on rvalues...
and this doesn't seem unreasonable if it's being passed to a helper method intended to process such results.
Helper methods are supposed to know the semantics of types they work for?
Imagine a function that calculates several results, some of which may fail, and then calls a combiner method to choose the "best" non-failing result of the set. Wouldn't the combiner naturally expect a fallible_result<T>&& as its inputs, since it wants to consume them?
No, as you say below it should expect result_or_error, the documented type to be used for 'saved' (i.e. to be inspected after the expression that generated them) results...
I suppose you could force it to accept result_or_error<T> instead, but since this isn't the return type of the function it inhibits using generic type-deducing template code with simplified expressions like: return combiner(calcA(), calcB(), calcC());
It inhibits it no more than the addition of other wrappers, like smart pointers or optionals, does... These are usually handled with 'interface normalization' global functions like get_ptr()...likewise boost::err could add get_result(), get_result_or_error(), has_result(), has_succeeded() or others along those lines...
The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.
As explained before this cannot happen as even the implicit conversion operators work only on rvalues...
Sure, you can tell people to only use this as a standalone expression and not as a function parameter, but this seems very fragile. People like to inline things, especially rvalues.
AFAICT I think I've answered the 'fragility' objections laid out so far...
On 20/11/2015 04:58, Domagoj Šarić wrote:
The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.
As explained before this cannot happen as even the implicit conversion operators work only on rvalues...
The result of a function call that returns either a bare T or a T&& is an rvalue. Your asserts will prevent this particular usage, but that's the only thing that does. And asserts don't fire until runtime, so if it's an infrequently exercised path (without a unit test) this may go unnoticed for quite a while. Especially if people are in the habit of testing release builds (which is not that uncommon).
"On Fri, 20 Nov 2015 03:36:54 +0530, Gavin Lambert
On 20/11/2015 04:58, Domagoj Šarić wrote:
The more problematic case is if the combiner was not expecting failure, and so someone used the same expression with a combiner that accepted T. So the compiler calls all three calc methods (constructing fallible_result<T>s along the way), then gets around to converting the first one back to T, which throws. This is ok, but then the other two are destroyed; and if at least one of these throws too, then you're dead.
As explained before this cannot happen as even the implicit conversion operators work only on rvalues...
The result of a function call that returns either a bare T or a T&& is an rvalue.
Your asserts will prevent this particular usage, but that's the only thing that does. And asserts don't fire until runtime, so if it's an infrequently exercised path (without a unit test) this may go unnoticed for quite a while. Especially if people are in the habit of testing release builds (which is not that uncommon).
Sorry, I misunderstood you...thought you were talking about converting to Ts _inside_ the combiner function...

Hm..right, I missed that this is actually a shady area in the case of multiple function parameters...probably because I just assumed that, even though the order of computation of individual parameter values is unspecified, each value would be fully computed before the compiler moves onto the next one...Trying to see what the standard actually says (n3797 draft @ 1.9.15): "When calling a function (whether or not the function is inline), every value computation and side effect associated with any argument expression, or with the postfix expression designating the called function, is sequenced before execution of every expression or statement in the body of the called function. [Note: Value computations and side effects associated with different argument expressions are unsequenced. —end note]"[1] -> this end note might be interpreted as implying the negative, i.e. that value computations and side effects associated with the _same_ argument expression are _not_ unsequenced... IOW that operations/instructions producing the value of parameter1 may not be interleaved with those producing the value of parameter2 (although whether parameter1 or parameter2 is produced first is left unspecified).

I may very well be completely mistaken in this amateur 'exegesis' but, even if I am wrong and 'complete/non-interleaved' parameter value computation is not guaranteed, fallible_result<> can still be made to work with the desired or 'good enough' [2] semantics by allowing multiple fallible_results to exist (removing the related asserts) and inserting an if (!std::uncaught_exception()) check before throwing (Scott Meyers' std::uncaught_exception() related gotw does not apply here). I can then improve the debugging logic to assert that at least one of the multiple fallible_results was inspected before leaving the current scope, thereby still catching the contrived case of unexamined/untouched/unused local fallible_result variables 'hidden&forgotten' in autos. I say contrived because this is still better than what you currently have with 'regular' return codes, i.e. you don't get even an assertion for unexamined results. You do get compiler specific warnings about unused variables but you'd get those just as well for fallible results [3].

[1] Just earlier than that paragraph we also read: "If a side effect on a scalar object is unsequenced relative to either another side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined." ...which specifically talks about scalars w/o explaining (AFAICT) the difference WRT other types of objects -> FWIW/if this can be interpreted in the 'inverse meaning' (i.e. to mean that the described UB does not apply to classes i.e. fallible_results)...

[2] Sure, this means that construction of other parameters can proceed even if a previous one has already failed but this is actually not that different from what you have to expect anyway precisely because of the undefined order in which parameters are constructed...

[3] Compilers usually omit this warning for types with nontrivial destructors (e.g. guard objects) but some compilers offer a function attribute that will issue a warning if the function's result is unexamined in all cases.

In any case/as said before, all this goes away when/if && destructors are added to the language (all wrong usage is detected at compile time)...
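The proposed guard is roughly the following (a sketch, not the library's code; std::uncaught_exception() is the pre-C++17 spelling):

    #include <exception>
    #include <stdexcept>

    struct error_info { char const * message; };

    struct guarded_result
    {
        bool       inspected = false;
        bool       failed    = true;
        error_info error     = { "something went wrong" };

        ~guarded_result() noexcept( false )
        {
            // Only self-throw for an unexamined failure when no other exception
            // is propagating - otherwise throwing here would call std::terminate.
            if ( !inspected && failed && !std::uncaught_exception() )
                throw std::runtime_error( error.message );
        }
    };

    int main()
    {
        try { guarded_result(); }                        // unexamined failure -> throws
        catch ( std::runtime_error const & ) {}

        try
        {
            guarded_result r;                            // destroyed during unwinding,
            throw std::runtime_error( "primary error" ); // but stays silent because another
        }                                                // exception is already in flight
        catch ( std::runtime_error const & ) {}
    }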
Yes, within the function that gets called the && reference parameter is an lvalue, not an rvalue, since it has a name. But all it takes to make it an rvalue again is a call to std::move.
And this does not seem like unreasonable behaviour in itself, if that only occurs once (perhaps to construct the result_or_error within the function).
I'm not sure what you are getting at here...
Even this is illegal (and really verbose) using your current rules:
typedef fallible_result fallible_foo_t;
typedef result_or_error foo_error_t;
fallible_foo_t calcA();
fallible_foo_t calcB();
fallible_foo_t combiner(foo_error_t result1, foo_error_t result2);
{ ... foo_error_t r = combiner(calcA(), calcB()); ... }
It doesn't seem like it should be, but it is. It's even still illegal if you write calcA().as_result_or_error() explicitly in the parameters.
True but that's a mere omission in the debugging code on my side and easily fixable - I just have to change fallible_result to consider itself inactive, i.e. decrease its debugging refcount, once/as soon as it is 'inspected' as opposed to waiting until it is destroyed - or simply replace this assert (as it was already shown to be too simple by your previous/above more generic example with multiple parameters - IOW the solution presented for that problem would also fix this one).

As for verbosity, remember to compare this to the 'competition', i.e. classic error codes, std::error_code and local try-catch blocks...I've primarily tried to eliminate any extra verbosity in the 'EH mode/style': that's why fallible_result operators * and -> return T - I could change the * operator (or add operator ()) to return result_or_error (eliminating the need for the lengthy 'as_result_or_error' which OTOH can also be renamed to something shorter)...
The only way it works is to explicitly separate out the calc calls, either as:
foo_error_t a = calcA(); foo_error_t b = calcB();
or as:
auto a = calcA().as_result_or_error(); auto b = calcB().as_result_or_error();
(and woe betide you if you accidentally use "auto" in the first case)
What woe? You'd get a compiler error if you 'used auto in the first case' and tried to send a and b to 'combiner' (or pretty much do anything else with them)...
On 27/11/2015 06:18, Domagoj Šarić wrote:
Sorry, I misunderstood you...thought you were talking about converting to Ts _inside_ the combiner function... Hm..right I missed that this is actually a shady area in the case of multiple function parameters...probably because I just assumed that, even though the order of computation of individual parameter values is unspecified, each value would be fully computed before the compiler moves onto the next one...Trying to see what the standard actually says (n3797 draft @ 1.9.15): "When calling a function (whether or not the function is inline), every value computation and side effect associated with any argument expression, or with the postfix expression designating the called function, is sequenced before execution of every expression or statement in the body of the called function. [Note: Value computations and side effects associated with different argument expressions are unsequenced. —end note]"[1] -> this end note might be interpreted as implying the negative, i.e. that value computations and side effects associated with _same_ argument expressions are _not_ unsequenced ... IOW that operations/instructions producing the value of parameter1 may not be interleaved with those producing the value of parameter2 (although the order whether parameter1 or parameter2 is produced first is left unspecified).
No, that simply says that all parameters must be fully evaluated before the first instruction of the called function executes. It does not say anything about the sequencing of the parameter evaluation prior to the actual call. As far as I am aware, function parameters may be evaluated in any order, including decomposed orders. By that I mean that in the expression f(a.b, c().d()), then you are guaranteed that a is evaluated before b and c() before d(), and both a.b and c().d() before *the actual call of f*, but you have no guarantees about the order of the evaluation of f to a method pointer vs. a vs. c(). So the compiler is perfectly free to evaluate c() first, then a, then f, then b, then d(), and then finally call the method that f evaluated to. Or several other such combinations.
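A quick way to observe this (the output can differ between compilers precisely because the ordering is unspecified):

    #include <cstdio>

    struct widget { int id; };

    widget make( int const id )    { std::printf( "make(%d)\n", id );   return { id }; }
    int    part( widget const w )  { std::printf( "part(%d)\n", w.id ); return w.id;   }

    void f( int, int ) {}

    int main()
    {
        // The compiler may evaluate these two arguments in either order and,
        // pre-C++17, may even interleave the sub-expressions of one argument
        // with those of the other; only "make before its own part" is guaranteed.
        f( part( make( 1 ) ), part( make( 2 ) ) );
    }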
I may very well be completely mistaken in this amateur 'exegesis' but, even if I am wrong and 'complete/non-interleaved' parameter values computation is not guaranteed fallible_result<> can still be made to work with the desired or 'good enough' [2] semantics by allowing multiple fallible_results to exist (removing the related asserts) and inserting an if (!std::uncaught_exception()) check before throwing (Scot Meyers' std::uncaught_exception() related gotw does not apply here). I can then improve the debugging logic to assert that at least one of the multiple fallible_results was inspected before leaving the current scope thereby still catching the contrived case of unexamined/untouched/unused local fallible_result variables 'hidden&forgotten' in autos. I say contrived because this is still better than what you currently have with 'regular' return codes, i.e. you don't get even an assertion for unexamined results. You do get compiler specific warnings about unused variables but you'd get those just as well for fallible results [3].
Yes, that would probably be an improvement.
Yes, within the function that gets called the && reference parameter is an lvalue, not an rvalue, since it has a name. But all it takes to make it an rvalue again is a call to std::move.
And this does not seem like unreasonable behaviour in itself, if that only occurs once (perhaps to construct the result_or_error within the function).
I'm not sure what you are getting at here...
I was referring to the hypothetical combiner function that accepted multiple fallible_result<>&& parameters (since it consumes all of them), and then returns its own fallible_result from one or more of them.

Within the function it will not be able to call any methods on the fallible_result<>&& parameters (since they're actually lvalues now), except for one very common case:

    auto a = std::move(paramA).as_result_or_error();

(And that this construct is not unreasonable as long as paramA is not accessed after this point -- it's no different from any other move.)

I was also referring to the compiler errors that would be generated from "improper use" being the same ones that people are likely to reflexively add std::move() calls to resolve.
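In sketch form (hypothetical wrapper types using the as_result_or_error() spelling from this thread, not necessarily the library's exact API), the combiner shape under discussion looks like:

    #include <utility>

    struct err_t { int code; };

    template <typename T>
    struct result_or_error_like
    {
        bool  ok;
        T     value;
        err_t error;
    };

    template <typename T>
    struct fallible_result_like
    {
        result_or_error_like<T> state;

        // &&-qualified, so inside the combiner the named parameters have to be
        // std::move()d before this can be called (named rvalue refs are lvalues).
        result_or_error_like<T> as_result_or_error() && { return std::move( state ); }
    };

    fallible_result_like<int> calcA() { return { { true,  1, {}    } }; }
    fallible_result_like<int> calcB() { return { { false, 0, { 7 } } }; }

    int best_of( fallible_result_like<int> && pa, fallible_result_like<int> && pb )
    {
        auto const a = std::move( pa ).as_result_or_error();
        auto const b = std::move( pb ).as_result_or_error();
        if ( a.ok ) return a.value;
        if ( b.ok ) return b.value;
        return -1;
    }

    int main() { return best_of( calcA(), calcB() ); }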
The only way it works is to explicitly separate out the calc calls, either as:
foo_error_t a = calcA(); foo_error_t b = calcB();
or as:
auto a = calcA().as_result_or_error(); auto b = calcB().as_result_or_error();
(and woe betide you if you accidentally use "auto" in the first case)
What woe? You'd get a compiler error if you 'used auto in the first case' and tried to send a and b to 'combiner' (or pretty much do anything else with them)...
The woe is that you'd also get a runtime assert since two fallible_results managed to exist at the same time again. But yes, that could only happen if you forgot to actually use them, or if you thought the errors meant that you were supposed to add std::move().

It does make the behaviour a little strange for the various cases though:

    calcA(); // throws because fallible_result went uninspected
    calcB();
    --------------
    auto a = calcA();
    auto b = calcB();
    // asserts because two fallible_results exist
    // compiler error if a or b are used later without std::move
    --------------
    foo_error_t a = calcA();
    foo_error_t b = calcB();
    // no errors even if a or b go unused after this point
    // a & b are either valid or have error codes
    --------------
    foo_t a = calcA(); // throws if calcA has an error
    foo_t b = calcB(); // throws if calcB has an error
    // a & b are both valid if you survive this far
    --------------
    auto a = calcA().as_result_or_error();
    auto b = calcB().as_result_or_error();
    // no errors even if a or b go unused after this point
    // a & b are either valid or have error codes
    --------------
    something(calcA(), calcB());
    // asserts for two fallible_results
    --------------
    something(calcA().as_result_or_error(), calcB().as_result_or_error());
    // might work or might assert depending on the compiler's mood
On Fri, 27 Nov 2015 04:56:32 +0530, Gavin Lambert
I was referring to the hypothetical combiner function that accepted multiple fallible_result<>&& parameters (since it consumes all of them), and then returns its own fallible_result from one or more of them.
Within the function it will not be able to call any methods on the fallible_result<>&& parameters (since they're actually lvalues now), except for one very common case: auto a = std::move(paramA).as_result_or_error();
(And that this construct is not unreasonable as long as paramA is not accessed after this point -- it's no different from any other move.)
Since std::move is meant for, well, moving and not for accessing special functionality (at least in client code) I would consider this 'smelly'/unreasonable/'antipatternistic'...regardless of that, even if you do do this everything will still work (with the recently discussed changes) although with possibly slight codegen degeneration.
I was also referring to the compiler errors that would be generated from "improper use" being the same ones that people are likely to reflexively add std::move() calls to resolve.
Well that would mean that 'people' didn't 'RTM' i.e. they don't know what both fallible_result and std::move are for... You could just as well argue that you can circumvent RAII with placement new...the point is that you cannot unintentionally do the wrong thing...
The woe is that you'd also get a runtime assert since two fallible_results managed to exist at the same time again. But yes, that could only happen if you forgot to actually use them, or if you thought the errors meant that you were supposed to add std::move().
As above...
It does make the behaviour a little strange for the various cases though:
calcA(); // throws because fallible_result went uninspected calcB();
That's intended/by design behaviour, 'the classic EH style usage' (the
throw happens only if the call failed, i.e. the return value contains an
err_t, not T, so the err_t is thrown, with a possible transformation into
an object/wrapper more apt for EH but those are details I'd rather not go
at this stage)...what problem do you see here?
ps. if the functions in question would otherwise return void (and would
thus be transformed to return fallible_result<void>)...
-------------- auto a = calcA(); auto b = calcB(); // asserts because two fallible_results exist // compiler error if a or b are used later without std::move
Again, intended behaviour - error on wrong usage (as there is no way for
the compiler and/or library to deduce whether you want the auto to mean T
or result_or_error)...
-------------- foo_error_t a = calcA(); foo_error_t b = calcB(); // no errors even if a or b go unused after this point // a & b are either valid or have error codes
Intended behaviour - error 'code' style - this is kind of like a
generalised concept of the std::pair...
-------------- foo_t a = calcA(); // throws if calcA has an error foo_t b = calcB(); // throws if calcB has an error // a & b are both valid if you survive this far
Same/similar to your first example - intended behaviour - EH style...
-------------- auto a = calcA().as_result_or_error(); auto b = calcB().as_result_or_error(); // no errors even if a or b go unused after this point // a & b are either valid or have error codes
-------------- something(calcA(), calcB()); // asserts for two fallible_results
No longer, we fixed those, no? ;)
-------------- something(calcA().as_result_or_error(), calcB().as_result_or_error()); // might work or might assert depending on the compiler's mood
Same as above (fixed). (note: the as_result_or_error verbosity is only required if something() is a function template so implicit conversion does not kick in)
On 30/11/2015 16:18, Domagoj Šarić wrote:
On Fri, 27 Nov 2015 04:56:32 +0530, Gavin Lambert
wrote: It does make the behaviour a little strange for the various cases though:
calcA(); // throws because fallible_result went uninspected calcB();
That's intended/by design behaviour, 'the classic EH style usage' (the throw happens only if the call failed, i.e. the return value contains an err_t, not T, so the err_t is thrown, with a possible transformation into an object/wrapper more apt for EH but those are details I'd rather not go at this stage)...what problem do you see here?
I'm not calling out the cases individually as problems, I'm trying to point out that the set as a whole seems inconsistent, and somewhat surprising with regard to "auto" and (at least for the original implementation) with the asserts. Maybe I'm just being too nitpicky though. They were intended to cover all usages, though I know there are some that I missed (such as use of operator*).
On 30.11.2015. 4:45, Gavin Lambert wrote:
On 30/11/2015 16:18, Domagoj Šarić wrote:
On Fri, 27 Nov 2015 04:56:32 +0530, Gavin Lambert
wrote: It does make the behaviour a little strange for the various cases though: <snip> That's intended/by design behaviour <snip> I'm not calling out the cases individually as problems, I'm trying to point out that the set as a whole seems inconsistent, and somewhat surprising with regard to "auto" and (at least for the original implementation) with the asserts. Maybe I'm just being too nitpicky though.
Hi Gavin, thanks for all the valuable feedback, and sorry for such a late response... I've since fixed the issues that were brought up (notably the asserts/sanity checks). Could you please restate/elaborate on where you see inconsistencies in the design (or perhaps the idea itself)?

The problem of making auto 'less nice' is, I suspect, not 'fully solvable' but it is to a 'significant' degree (e.g. by the use of the existing operator* or some possible use of operator| as is done by the Boost.Range library with its adaptors)...

ps. OT/'to whom it may concern': fixing the 'too strong assertions' problem (allowing multiple fallible_results to exist) and making it work on Android (where we still don't have even proper 'POD thread locals') with Clang forced me to reinvent boost::thread_specific_ptr (to avoid a dependency on Boost.Thread), in the process of which I found that Boost.Thread only asserts/'verifies' that calls to pthread_key_create() and pthread_setspecific() succeeded (which may fail with ENOMEM)...
Le 10/01/2016 00:02, Domagoj Saric a écrit :
ps. OT/'to whom it may concern': fixing the 'too strong assertions' problem (allowing multiple fallible_results to exist) and making it work on Android (where we still don't have even proper 'POD thread locals') with Clang forced me to reinvent boost::thread_specific_ptr (to avoid a dependency on Boost.Thread), in the process of which I found that Boost.Thread only asserts/'verifies' that calls to pthread_key_create() and pthread_setspecific() succeeded (which may fail with ENOMEM)...
Hi, Please, create a Trac ticket so we don't forget it (or a github issue if you prefer). Ah, if you have a patch it will be welcome also :) Best, Vicente
On 10.1.2016. 0:45, Vicente J. Botet Escriba wrote:
Le 10/01/2016 00:02, Domagoj Saric a écrit :
ps. OT/'to whom it may concern': fixing the 'too strong assertions' problem (allowing multiple fallible_results to exist) and making it work on Android (where we still don't have even proper 'POD thread locals') with Clang forced me to reinvent boost::thread_specific_ptr (to avoid a dependency on Boost.Thread), in the process of which I found that Boost.Thread only asserts/'verifies' that calls to pthread_key_create() and pthread_setspecific() succeeded (which may fail with ENOMEM)...
Hi,
Please, create a Trac ticket so we don't forget it (or a github issue if you prefer).
https://svn.boost.org/trac/boost/ticket/11903 unfortunately I forgot to login before submitting so it's anonymous...
Ah, if you have a patch it will be welcome also :)
No, as I don't use Boost.Thread...'we are now getting deeper into OT but': if you fix this 'conventionally' by throwing exceptions that will make the TLS helper functions no longer nothrow, which brings me to a related dilemma I had with the C++11 thread_local keyword. Checking the latest draft of the standard I could not figure out what the guarantees are, if any, about thread_local WRT (its) resource/storage allocation/construction. AFAIK, with proper support from the loader and the OS, it is possible to implement C++11 TLS (including function-local thread_local statics) so that thread_local storage is allocated on thread creation, meaning that if the thread successfully starts all thread_local storage is already preallocated...And, as I said, I could not figure out whether the standard assumes this or not, and if not, how storage allocation failure is reported...with std::bad_alloc? And is a try-catch around a function-local static thread_local enough to catch it?
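For reference, the pthread calls in question do report failure through their return values (EAGAIN/ENOMEM among them), so a TLS helper can surface the error instead of only asserting - a rough sketch:

    #include <pthread.h>
    #include <system_error>

    class tls_key
    {
    public:
        tls_key()
        {
            int const result = ::pthread_key_create( &key_, nullptr );
            if ( result != 0 )   // EAGAIN or ENOMEM
                throw std::system_error( result, std::generic_category(), "pthread_key_create" );
        }

        ~tls_key() { ::pthread_key_delete( key_ ); }

        void set( void * const value )
        {
            int const result = ::pthread_setspecific( key_, value );
            if ( result != 0 )   // EINVAL or ENOMEM
                throw std::system_error( result, std::generic_category(), "pthread_setspecific" );
        }

        void * get() const { return ::pthread_getspecific( key_ ); }

    private:
        pthread_key_t key_;
    };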
Le 11/01/2016 22:13, Domagoj Saric a écrit :
On 10.1.2016. 0:45, Vicente J. Botet Escriba wrote:
Le 10/01/2016 00:02, Domagoj Saric a écrit :
ps. OT/'to whom it may concern': fixing the 'too strong assertions' problem (allowing multiple fallible_results to exist) and making it work on Android (where we still don't have even proper 'POD thread locals') with Clang forced me to reinvent boost::thread_specific_ptr (to avoid a dependency on Boost.Thread), in the process of which I found that Boost.Thread only asserts/'verifies' that calls to pthread_key_create() and pthread_setspecific() succeeded (which may fail with ENOMEM)...
Hi,
Please, create a Trac ticket so we don't forget it (or a github issue if you prefer).
https://svn.boost.org/trac/boost/ticket/11903 unfortunately I forgot to login before submitting so it's anonymous...
Thanks and no problem.
Ah, if you have a patch it will be welcome also :)
No, as I don't use Boost.Thread...'we are now getting deeper into OT but': if you fix this 'conventionally' by throwing exceptions that will make the TLS helper functions no longer nothrow, which brings me to a related dilemma I had with the C++11 thread_local keyword. Checking the latest draft of the standard I could not figure out what the guarantees are, if any, about thread_local WRT (its) resource/storage allocation/construction. AFAIK, with proper support from the loader and the OS, it is possible to implement C++11 TLS (including function-local thread_local statics) so that thread_local storage is allocated on thread creation, meaning that if the thread successfully starts all thread_local storage is already preallocated...And, as I said, I could not figure out whether the standard assumes this or not, and if not, how storage allocation failure is reported...with std::bad_alloc? And is a try-catch around a function-local static thread_local enough to catch it?
The best will be to ask on std-discussion :) I would say that thread_local doesn't need any allocation (I would store them on the stack). Vicente
Le 17/11/2015 01:45, Domagoj Saric a écrit :
https://github.com/psiha/err README.md copy-paste:
--------------------------------------------- err - yet another take on C++ error handling.
We have throw and std::error_code, what more could one possibly want? ---------------------------------------------
What, primarily, makes (Boost.)Err different from other proposals is the ability to (using latest C++ features) detect/distinguish between temporaries and 'saved' return values by using two different class templates (result wrappers), fallible_result (for rvalues) and result_or_error (for lvalues) and thus minimise or often completely eliminate any extra verbosity:

If we had a function foo() that produces bar_t objects but can fail with an err_t, up till now we had two options:
* bar_t foo() throw( err_t );
* optional<bar_t> foo( err_t & );

Err enables the library writer to write a single API and a single implementation: err::fallible_result foo(); where fallible_result is the class template that wraps temporaries/rvalues returned from functions and its member functions are all declared with && (i.e. callable only on rvalues) so you get a compiler error if you save it to an auto value and try to do anything with it. The two exceptions are the implicit conversion operators to:
- bar_t, which will either return bar_t or throw err_t and which is used for the 'EH code path': bar_t my_bar( foo() );
- err::result_or_error, which is used for the 'oldsk00l nothrow error code path': err::result_or_error maybe_bar( foo() ); if ( maybe_bar ) { print( *maybe_bar ); } else { log( maybe_bar.error() ); }

Also, if the fallible_result rvalue is left uninspected and contains an error its destructor will throw (which AFAICT should be safe considering it is an rvalue and no other exception can possibly be active at the same time) - this makes code that uses Err enabled libraries/APIs but relies on EH almost indistinguishable from 'classic EH' APIs (almost because the one difference that remains is with the immediate use of the return value: because of the wrapper class one can no longer write foo().do_something() but has to implicitly use the -> operator and write foo()->do_something() where the -> operator will check-and-throw if foo() did not succeed).

To wrap up, this approach gives the user the immediate and fine grained control over which error handling mechanism he/she wants to use while allowing the developer to write a single API and implementation - AFAICT it is a no brainer replacement for ..., it is less verbose and much more efficient ;-)

ps. the small library @ https://github.com/psiha/err is by no means a finished product, it contains no tests or documentation but 'it works' (i.e. it is used in the, also upcoming, https://github.com/psiha/mmap)...I'm bringing it to public view for scrutiny, discussion and guidance lest I steer in the wrong direction ;)
Hi,
I like the idea of having a movable-only class that is used as the result of a function and that ensures that the error is checked (TBoost.Expected has an ensure_read error class that terminates if the error is not read), and the conversion to a copyable class that is used to pass this result as a parameter to a function.

However, how do you prevent the user from forgetting to check for the error in the converted err::result_or_error?
participants (4)
- Domagoj Saric
- Domagoj Šarić
- Gavin Lambert
- Vicente J. Botet Escriba