Boost.Thread Continuation future with executor blocking in destructor
Hi,

When creating a continuation future with future<>::then(), and the policy is boost::launch::async, the created future's destructor blocks, like a future created by boost::async, and probably for the same reasons (as outlined by the standardization committee in n3679).

The same behavior is coded into continuation futures' destructor when using an executor. Why is it needed here? The situation with executor-run continuations is a little different, as even when the future is destroyed, the lifetime of the running job is still bound to the executor.

My use case: I've coded a network communication stack that returns to the caller a future<NetworkResponse>. I've previously used std::async(std::launch::deferred, ...) to transform the future's content before returning it to the caller (bytes -> protobuf -> internal object), and I consider such manipulation of a future value to be a very powerful feature. I've used std::launch::deferred to reduce the number of running threads, but the downside is that the client can't wait for the future value with a timeout. On the other side of the spectrum is std::launch::async, which would run a new thread - per pending communication - that would do little more than block.

boost::future<>::then is a fantastic fit for my use case, as I can use my boost::asio::io_service as an executor and let my communication stack's threads do the work without having them block on future.get() to first retrieve the result. The caller can then call future<>::wait_for() to wait for the reply with a custom timeout. This being network communication, though, a reply message may never arrive - the corresponding promise will eventually be destroyed by the stack, but I can't have users block on the future's destructor until that happens, after they have already decided that the reply is no longer worth waiting for.

Please advise if there's a workaround for this behavior that doesn't involve me distributing a custom version of Boost.Thread with my binaries?
:) Konrad
Le 22/04/15 14:28, Konrad Zemek a écrit :

Hi,
When creating a continuation future with future<>::then(), and the policy is boost::launch::async, the created future's destructor blocks, like a future created by boost::async, and probably for the same reasons (as outlined by the standardization committee in n3679).
The same behavior is coded into continuation futures' destructor when using an executor. Why is it needed here? The situation with executor-run continuations is a little different, as even when the future is destroyed, the lifetime of the running job is still bound to the executor.

An Executor can also create a thread for each task, and the Executor destructor doesn't need to wait until all the tasks have finished (even if some may do so). I don't see how to make the distinction without making the Executor concept more complex. Note, however, that I have on my todo list an implementation that doesn't block at all; it's just that everything needs more time than we have. It would also mean a breaking change, and we all know how annoying it is to introduce breaking changes.

My use case: I've coded a network communication stack that returns to the caller a future<NetworkResponse>. I've previously used std::async(std::launch::deferred, ...) to transform the future's content before returning it to the caller (bytes -> protobuf -> internal object), and I consider such manipulation of a future value to be a very powerful feature. I've used std::launch::deferred to reduce the number of running threads, but the downside is that the client can't wait for the future value with a timeout. On the other side of the spectrum is std::launch::async, which would run a new thread - per pending communication - that would do little more than block. boost::future<>::then is a fantastic fit for my use case, as I can use my boost::asio::io_service as an executor and let my communication stack's threads do the work without having them block on future.get() to first retrieve the result. The caller can then call future<>::wait_for() to wait for the reply with a custom timeout. This being network communication, though, a reply message may never arrive - the corresponding promise will eventually be destroyed by the stack, but I can't have users block on the future's destructor until that happens, after they have already decided that the reply is no longer worth waiting for.

I would expect the program to take care of the loss of communication and call set_exception on the promise before the promise is destroyed. However, a call to wait_for seems to be a good hint that the user knows what they are doing. I could consider having any call to a timed wait function disable blocking. An alternative could be a way to request that the future not block. Please let me know if either of these options would cover your use case. For the time being (and very temporarily), if you no longer want this blocking future, you can move it to a list of detached futures.

Please advise if there's a workaround for this behavior that doesn't involve me distributing a custom version of Boost.Thread with my binaries? :)

You can send any PR you consider improves the behavior of the library. Best, Vicente
2015-04-23 0:32 GMT+02:00 Vicente J. Botet Escriba
Le 22/04/15 14:28, Konrad Zemek a écrit :
Hi,
When creating a continuation future with future<>::then(), and the policy is boost::launch::async, the created future's destructor blocks, like a future created by boost::async, and probably for the same reasons (as outlined by the standardization committee in n3679).
The same behavior is coded into continuation futures' destructor when using an executor. Why is it needed here? The situation with executor-run continuations is a little different, as even when the future is destroyed, the lifetime of the running job is still bound to the executor.
An Executor can also create a thread for each task, and the Executor destructor doesn't need to wait until all the tasks have finished (even if some may do so). I don't see how to make the distinction without making the Executor concept more complex.
You're right, I erroneously assumed that executors have to join on destruction much like a future returned from async. Re-reading the proposal I see that join behavior is specified for a concrete executor, not the general executor concept.
Note, however, that I have on my todo list an implementation that doesn't block at all; it's just that everything needs more time than we have. It would also mean a breaking change, and we all know how annoying it is to introduce breaking changes.
My use case: I've coded a network communication stack that returns to the caller a future<NetworkResponse>. I've previously used std::async(std::launch::deferred, ...) to transform the future's content before returning it to the caller (bytes -> protobuf -> internal object), and I consider such manipulation of a future value to be a very powerful feature. I've used std::launch::deferred to reduce the number of running threads, but the downside is that the client can't wait for the future value with a timeout. On the other side of the spectrum is std::launch::async, which would run a new thread - per pending communication - that would do little more than block.

boost::future<>::then is a fantastic fit for my use case, as I can use my boost::asio::io_service as an executor and let my communication stack's threads do the work without having them block on future.get() to first retrieve the result. The caller can then call future<>::wait_for() to wait for the reply with a custom timeout. This being network communication, though, a reply message may never arrive - the corresponding promise will eventually be destroyed by the stack, but I can't have users block on the future's destructor until that happens, after they have already decided that the reply is no longer worth waiting for.
I would expect the program to take care of the loss of communication and call set_exception on the promise before the promise is destroyed. However, a call to wait_for seems to be a good hint that the user knows what they are doing. I could consider having any call to a timed wait function disable blocking. An alternative could be a way to request that the future not block. Please let me know if either of these options would cover your use case.
Either of these options would work for me, as I could emulate a "don't block" request with a call to future<>::wait_for(0). I'd prefer the latter to the former, though, if only because it fits my needs better and would make it more explicit that the future's behavior is modified.
For the time being (and very temporarily), if you no longer want this blocking future, you can move it to a list of detached futures.
Please advise if there's a workaround for this behavior that doesn't involve me distributing a custom version of Boost.Thread with my binaries? :)
You can send any PR you consider improves the behavior of the library.
Of course; I just prefer to discuss it first to find out if others consider such a change to be an improvement. :) Konrad
How is Boost doing in embedded environments in terms of efficiency? Thanks
On Thu, Apr 23, 2015 at 5:03 AM, Trek
How is Boost doing in embedded environments in terms of efficiency? Thanks
The last time I targeted ARM with Boost, it worked pretty well. Much of this is contingent on the host OS, processor, supporting hardware architecture, and so on, of course. But by and large, I couldn't complain; the high-res timer operations did very well, threading and futures worked splendidly, and even signals/slots did pretty well.

This was for embedded only, not real-time per se. I might dare to call it soft real-time; the OS (Arch Linux) was definitely not of the real-time variety. We had other issues going on - a kernel design philosophy that failed to regard application code (i.e. for things like resource file handling), multiple failures in duplicate I2C addressing, and things of that nature. Nothing that a disciplined, diligent, even vigilant design process couldn't remedy, and nothing to do with Boost, however.

These aren't metrics, of course, and you should run your own. My experience there was, by and large, technically very good. HTH
_______________________________________________ Boost-users mailing list Boost-users@lists.boost.org http://lists.boost.org/mailman/listinfo.cgi/boost-users
That's a fairly vague question. I use thread, date_time, chrono, filesystem, and system on an older embedded ARM processor. These libraries are great for my application.

I recommend that you not just grab anything you see in Boost indiscriminately, but rather (as with anything else) look a little deeper and make sure the specific library is suitable. I think it's fair to say that all Boost libraries have efficiency as a consideration, but are most uncompromising about correctness. A few years back I used Boost.Python and found that it generated surprisingly large code in order to be correct in cases that didn't matter to me. The code size almost became problematic. Some libraries (I don't have an example at hand) do things that pretty clearly imply a fair amount of malloc() and free(). Just because it's Boost doesn't mean it's going to be suitable for your embedded application.

Steven J. Clark
VGo Communications

From: Boost-users [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Trek
Sent: Thursday, April 23, 2015 5:04 AM
To: boost-users
Subject: [Boost-users] boost on embedded target such as ARM

how is boost doing on embedded environment in terms of efficiency? thanks
Le 23/04/15 02:49, Konrad Zemek a écrit :
2015-04-23 0:32 GMT+02:00 Vicente J. Botet Escriba
Le 22/04/15 14:28, Konrad Zemek a écrit :
I would expect the program to take care of the loss of communication and call set_exception on the promise before the promise is destroyed. However, a call to wait_for seems to be a good hint that the user knows what they are doing. I could consider having any call to a timed wait function disable blocking. An alternative could be a way to request that the future not block. Please let me know if either of these options would cover your use case.

Either of these options would work for me, as I could emulate a "don't block" request with a call to future<>::wait_for(0). I'd prefer the latter to the former, though, if only because it fits my needs better and would make it more explicit that the future's behavior is modified.
Please don't forget to add a Trac ticket, so that we don't forget this feature request. Best, Vicente
participants (5)
- Konrad Zemek
- Michael Powell
- Steven Clark
- Trek
- Vicente J. Botet Escriba