[ASIO] Do I have to use shared_ptrs with ASIO?
Hi Everyone,

I am looking at an example implementation of an asynchronous TCP echo server in the Boost.ASIO library: https://www.boost.org/doc/libs/1_77_0/doc/html/boost_asio/example/cpp11/echo... It uses a shared_ptr in the implementation to store the per-session state: the socket handle and the data buffer. I tried to modify it in a number of ways to avoid the use of shared_ptr (and any form of shared ownership), but they all failed. It looks like I have to use a shared_ptr (or equivalent) when I use ASIO sockets.

But conceptually, there is never a point in time where two tasks have to access members of this `session` concurrently: they are always sequenced. So, at least conceptually, there should be a way to do this with a session type that is only movable, with unique ownership: a task does its job with the session and then moves it to the subsequent task.

So, is this necessity to use a shared pointer an unnecessary and suboptimal design choice? Or is there something fundamental that prevents unique ownership of buffers and socket handles? Or is there a way to do it in ASIO that I overlooked?

I am not trying to solve any specific problem. It just strikes me that a shared_ptr is used in a demo example for the library. I was always of the opinion that shared_ptr is often abused as a way of doing ad-hoc GC.

Regards,
&rzej;
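For context, the session class in the linked example looks roughly like this (condensed from memory rather than copied verbatim, so consult the link for the exact source; assume the usual #include <boost/asio.hpp> and using boost::asio::ip::tcp). The `self` capture of shared_from_this() is the shared-ownership idiom in question:

    class session : public std::enable_shared_from_this<session>
    {
    public:
      session(tcp::socket socket) : socket_(std::move(socket)) {}

      void start() { do_read(); }

    private:
      void do_read()
      {
        auto self(shared_from_this());        // keeps *this alive until the handler runs
        socket_.async_read_some(boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length)
            {
              if (!ec)
                do_write(length);
            });
      }

      void do_write(std::size_t length)
      {
        auto self(shared_from_this());
        boost::asio::async_write(socket_, boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*bytes*/)
            {
              if (!ec)
                do_read();
            });
      }

      tcp::socket socket_;
      enum { max_length = 1024 };
      char data_[max_length];
    };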
On Tue, Sep 21, 2021 at 7:34 AM Andrzej Krzemienski via Boost wrote:
I am not trying to solve any specific problem. It just strikes me that a shared_ptr is used in a demo example for the library. I was always of the opinion that a shared_ptr is often abused as a way of doing an ad-hoc GC.
In theory it should work since the echo protocol is half-duplex. What happens when you switch it to unique_ptr? Move-only handlers should work, but it is possible that Chris missed a place that is still doing a copy. Thanks
On Tue, 21 Sep 2021 at 16:50, Vinnie Falco via Boost wrote:
On Tue, Sep 21, 2021 at 7:34 AM Andrzej Krzemienski via Boost wrote:
I am not trying to solve any specific problem. It just strikes me that a shared_ptr is used in a demo example for the library. I was always of the opinion that a shared_ptr is often abused as a way of doing an ad-hoc GC.
In theory it should work since the echo protocol is half-duplex. What happens when you switch it to unique_ptr? Move-only handlers should work, but it is possible that Chris missed a place that is still doing a copy.
It breaks when I pass a callback (completion handler), for instance in:

    void do_read()
    {
      auto self(shared_from_this());
      socket_.async_read_some(boost::asio::buffer(data_, max_length),
          [this, self](boost::system::error_code ec, std::size_t length)
          {
            if (!ec)
            {
              do_write(length);
            }
          });
    }

I would need to move the data inside the lambda capture, but if I do it, the subsequent call to socket_.async_read_some() is UB. In order for this to work, the function `async_read_some()` would have to pass the socket back to my handler after it has performed the read.

Regards,
&rzej;
I haven't tested it, but this should probably work:
    class session {
    public:
        session(tcp::socket socket)
            : session(std::make_unique<session_impl>(std::move(socket))) {}

        void start() { do_read(); }

    private:
        static constexpr std::size_t max_length = 1024;

        struct session_impl {
            session_impl(tcp::socket socket) : socket_(std::move(socket)) {}
            tcp::socket socket_;
            char data_[max_length];
        };

        explicit session(std::unique_ptr<session_impl> impl)
            : impl_(std::move(impl)) {}

        void do_read() {
            auto& impl = *impl_;  // stable reference, taken before impl_ is moved
            impl.socket_.async_read_some(
                boost::asio::buffer(impl.data_, max_length),
                [impl = std::move(impl_)](boost::system::error_code ec,
                                          std::size_t length) mutable {
                    if (ec) { return; }
                    session(std::move(impl)).do_write(length);
                });
        }

        void do_write(std::size_t length) { /* ... */ }

        std::unique_ptr<session_impl> impl_;
    };

    ...
    session(std::move(socket)).start();
<snip>
On Tue, 21 Sept 2021 at 17:22, Дмитрий Архипов via Boost <boost@lists.boost.org> wrote:
I haven't tested it, but this should probably work:
<snip>
On Tue, 21 Sep 2021 at 18:02, Andrzej Krzemienski via Boost wrote:
On Tue, 21 Sep 2021 at 16:50, Vinnie Falco via Boost wrote:
On Tue, Sep 21, 2021 at 7:34 AM Andrzej Krzemienski via Boost wrote:
I am not trying to solve any specific problem. It just strikes me that a shared_ptr is used in a demo example for the library. I was always of the opinion that a shared_ptr is often abused as a way of doing an ad-hoc GC.
Dear Andrzej,

The use of shared_ptr in asio examples is largely an historic artefact from when the only completion type was a function object, as in these examples.

When using this style of asynchronous completion it is convenient to use some kind of reference-counted pointer to maintain the lifetime of IO objects such as sockets and their associated state, such as buffers. You don't *have* to use reference counting, but you will need to ensure that the IO objects and related buffers are not deleted while there are any asynchronous operations in progress against them. A shared_ptr happens to be the most convenient way of doing this when:

- using continuation-passing style completion handlers, and
- the lifetime of the IO object is not deterministic.

When using other styles of completion handling, such as the in-built coroutine support, reference counting can often become less important since code can be written in a "structured concurrency" style. In this way, asynchronous code can be laid out with the same structure as synchronous code. IO object lifetimes then become governed by the coroutines that created them and are awaiting asynchronous operations against them.
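As a rough illustration of that coroutine style, here is a minimal, untested sketch assuming Boost.Asio's C++20 coroutine support (asio::awaitable, use_awaitable, co_spawn); it is not code from this thread. The socket and buffer are plain locals, so their lifetime is governed by the coroutine frame rather than a shared_ptr:

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <exception>

    namespace asio = boost::asio;
    using asio::ip::tcp;

    asio::awaitable<void> echo_session(tcp::socket socket)
    {
      char data[1024];
      try
      {
        for (;;)
        {
          std::size_t n = co_await socket.async_read_some(
              asio::buffer(data), asio::use_awaitable);
          co_await asio::async_write(
              socket, asio::buffer(data, n), asio::use_awaitable);
        }
      }
      catch (std::exception const&)
      {
        // Connection closed or error: socket and buffer die with the coroutine frame.
      }
    }

    asio::awaitable<void> listener(unsigned short port)
    {
      auto executor = co_await asio::this_coro::executor;
      tcp::acceptor acceptor(executor, {tcp::v4(), port});
      for (;;)
      {
        tcp::socket socket = co_await acceptor.async_accept(asio::use_awaitable);
        asio::co_spawn(executor, echo_session(std::move(socket)), asio::detached);
      }
    }

    // Usage sketch:
    //   asio::io_context ioc;
    //   asio::co_spawn(ioc, listener(55555), asio::detached);
    //   ioc.run();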
<snip>
On Tue, 21 Sep 2021 at 22:47, Richard Hodges via Boost wrote:
<snip>
Thank you for this insightful reply. My impression when looking at ASIO examples is similar. Either I decide to use function callbacks, and in this case I will use reference counting (or apply a very clever trick), or I will use C++ coroutines. In either case there will be an additional heap memory allocation required: either manually by me, or indirectly by the coroutine machinery. I wonder if heap allocation is a necessary consequence of asynchronous computations, or is it only the choice of the interface in ASIO. Regards, &rzej;
<snip>
On 22/09/2021 15:39, Andrzej Krzemienski via Boost wrote:
I wonder if heap allocation is a necessary consequence of asynchronous computations, or is it only the choice of the interface in ASIO.
Internally ASIO will dynamically allocate additional state to store async ops in flight. Upon completion, those go into a free list and are reused. You cannot stop ASIO doing at least some form of dynamic memory allocation, albeit you can mitigate it using a custom Allocator.

But to answer your question more generally, it is quite hard to avoid heap allocation for asynchronous *anything*. As Vinnie pointed out, you can't move state while something async is occurring. So there are two choices: either you use heap allocation, or something gives you an opaque handle and you store that opaque handle in a map to state, which is probably a heap allocation.

Sender-Receiver can avoid all heap allocations, but the price is you must invert your program design to use Sender-Receiver by describing all possible control flow paths at compile time, and that feels most unnatural plus it requires lots of typing. It's also an exact inversion of design from ASIO, which is based around code dynamically reacting to i/o state changes, whereas Sender-Receiver involves traversing a static state transition graph. They are chalk and cheese.

Taking a bigger picture again, OS kernels almost always allocate memory inside the kernel for async i/o, whereas they usually can avoid it for blocking and non-blocking i/o. This is why blocking i/o can perform better than async i/o in some circumstances, because the kernel doesn't call its internal malloc. So I'd say there is a strong, but not fixed, correlation between async anything and dynamic memory allocation, in general.

Niall
Niall Douglas wrote:
But to answer your question more generally, it is quite hard to avoid heap allocation for asynchronous *anything*. As Vinnie pointed out, you can't move state while something async is occurring. So there are two choices, either you use heap allocation or something gives you an opaque handle and you store that opaque handle in a map to state, which is probably a heap allocation.
Hypothetically, what Andrzej was asking for was: he gives Asio a movable state object and a movable completion handler, Asio puts these somewhere, initiates the async op, doesn't move them in the meantime, when the async op completes invokes the handler, passing it the address of the state object, the handler then either moves the state back somewhere, or gives it back to Asio for the next async call. This looks doable, although in practice probably won't gain much.
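A hypothetical helper along those lines, layered on top of the existing callback API, might look as follows; the names async_read_owning and session_state are made up for illustration, and note that one heap allocation for the state still remains, which is part of why it probably won't gain much:

    // Hypothetical helper, not part of Asio: ownership of the per-session state
    // travels into the operation and is handed back to the completion handler.
    #include <boost/asio.hpp>
    #include <memory>
    #include <utility>

    using boost::asio::ip::tcp;

    struct session_state
    {
      explicit session_state(tcp::socket s) : socket(std::move(s)) {}
      tcp::socket socket;
      char data[1024];
    };

    // Handler signature: void(boost::system::error_code, std::size_t,
    //                         std::unique_ptr<session_state>)
    template <class Handler>
    void async_read_owning(std::unique_ptr<session_state> state, Handler handler)
    {
      auto& s = *state;  // stable reference taken before the pointer is moved
      s.socket.async_read_some(
          boost::asio::buffer(s.data),
          [state = std::move(state), handler = std::move(handler)]
          (boost::system::error_code ec, std::size_t n) mutable
          {
            handler(ec, n, std::move(state));  // hand ownership back to the caller
          });
    }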
On Wed, 22 Sept 2021 at 19:40, Peter Dimov via Boost wrote:
Niall Douglas wrote:
But to answer your question more generally, it is quite hard to avoid heap allocation for asynchronous *anything*. As Vinnie pointed out, you can't move state while something async is occurring. So there are two choices, either you use heap allocation or something gives you an opaque handle and you store that opaque handle in a map to state, which is probably a heap allocation.
There is another strategy in common use in financial markets, in which the total memory footprint is allocated up front on program start. The technique is in production use with ASIO in such applications as commodity and stock exchanges. The total amount of memory per connection is known up front, as is the total number of allowed simultaneous connections. Asio can take advantage of this as Niall mentioned, by associating a custom allocator with the IO executor and the handlers of asynchronous operations. The net effect is zero allocations and no thread synchronisation (most matching engines run in a single thread in a hot loop with kernel-bypass network drivers). All this functionality is available out of the box, with the exception of the custom allocator, which you would need to write yourself to suit your chosen memory recycling strategy.
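For reference, a condensed sketch of such a handler allocator, modelled loosely on Asio's documented "custom memory allocation" example (the names handler_memory, handler_allocator and custom_alloc_handler are illustrative, and the block size is arbitrary). Asio picks the allocator up through the handler's nested allocator_type and get_allocator() and uses it for its per-operation bookkeeping:

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <new>
    #include <utility>

    // One fixed block, reused for the single in-flight operation of a connection.
    class handler_memory
    {
    public:
      void* allocate(std::size_t size)
      {
        if (!in_use_ && size <= sizeof(storage_))
        {
          in_use_ = true;
          return &storage_;
        }
        return ::operator new(size);  // fallback; should not happen once sized correctly
      }

      void deallocate(void* p)
      {
        if (p == &storage_)
          in_use_ = false;
        else
          ::operator delete(p);
      }

    private:
      alignas(std::max_align_t) unsigned char storage_[1024];
      bool in_use_ = false;
    };

    template <typename T>
    class handler_allocator
    {
    public:
      using value_type = T;

      explicit handler_allocator(handler_memory& mem) noexcept : memory_(&mem) {}

      template <typename U>
      handler_allocator(const handler_allocator<U>& other) noexcept : memory_(other.memory_) {}

      T* allocate(std::size_t n) const
      { return static_cast<T*>(memory_->allocate(n * sizeof(T))); }

      void deallocate(T* p, std::size_t) const { memory_->deallocate(p); }

      template <typename U>
      bool operator==(const handler_allocator<U>& other) const noexcept
      { return memory_ == other.memory_; }

      template <typename U>
      bool operator!=(const handler_allocator<U>& other) const noexcept
      { return memory_ != other.memory_; }

      handler_memory* memory_;  // public so the rebinding constructor can see it
    };

    // Wraps a completion handler and advertises the allocator to Asio.
    template <typename Handler>
    class custom_alloc_handler
    {
    public:
      using allocator_type = handler_allocator<Handler>;

      custom_alloc_handler(handler_memory& mem, Handler h)
        : memory_(&mem), handler_(std::move(h)) {}

      allocator_type get_allocator() const noexcept { return allocator_type(*memory_); }

      template <typename... Args>
      void operator()(Args&&... args) { handler_(std::forward<Args>(args)...); }

    private:
      handler_memory* memory_;
      Handler handler_;
    };

A handler wrapped as custom_alloc_handler<H>(mem, h) can then be passed anywhere a plain completion handler is accepted, e.g. to async_read_some.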
<snip>
On Wed, Sep 22, 2021 at 10:50 AM Richard Hodges via Boost wrote:
There is another strategy in common use in financial markets, in which the total memory footprint is allocated up front on program start.
That is the approach chosen here: https://github.com/boostorg/beast/blob/b15a5ff0e47e72ba3d4711d2514bc65749d03... Which, coincidentally, was written by Christopher Kohlhoff.... Thanks
On Wed, 22 Sep 2021 at 16:39, Andrzej Krzemienski via Boost <boost@lists.boost.org> wrote:
Either I decide to use function callbacks, and in this case I will use reference counting (or apply a very clever trick), or I will use C++ coroutines.
You could use boost::asio::spawn() (stackful coroutines/fibers) too and write async code that looks sequential.
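A minimal, untested sketch of that style (it needs <boost/asio/spawn.hpp> and links against Boost.Coroutine; for brevity it serves one connection at a time):

    #include <boost/asio.hpp>
    #include <boost/asio/spawn.hpp>
    #include <cstddef>

    namespace asio = boost::asio;
    using asio::ip::tcp;

    void run_echo_server(asio::io_context& ioc, unsigned short port)
    {
      asio::spawn(ioc, [&ioc, port](asio::yield_context yield)
      {
        tcp::acceptor acceptor(ioc, {tcp::v4(), port});
        for (;;)
        {
          boost::system::error_code ec;
          tcp::socket socket(ioc);
          acceptor.async_accept(socket, yield[ec]);
          if (ec) continue;

          // The "session state" is just locals on this coroutine's stack.
          char data[1024];
          for (;;)
          {
            std::size_t n = socket.async_read_some(asio::buffer(data), yield[ec]);
            if (ec) break;
            asio::async_write(socket, asio::buffer(data, n), yield[ec]);
            if (ec) break;
          }
        }
      });
    }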
On Tue, 21 Sep 2021 at 17:22, Дмитрий Архипов via Boost wrote:
I haven't tested it, but this should probably work:
<snip>
Thank you for this example. I haven't compiled it yet, but my superficial analysis tells me that it should work. I asked the question, you gave the right answer, so I should only be thankful, I guess. But having seen the answer, I now realize that I asked the wrong question. The question that I really wanted to ask is: do I have to heap-allocate "IO" objects in order to use them with continuation-passing style completion handlers?

My observation about the example code:

    void do_read()
    {
      auto self(shared_from_this());
      socket_                                      // <-- socket passed in one place
          .async_read_some(boost::asio::buffer(data_, max_length),
              [this, self]                         // <-- socket passed in another place
              (boost::system::error_code ec, std::size_t length)
              {
                if (!ec)
                {
                  do_write(length);
                }
              });
    }

is that I have to pass my socket in two places in one function invocation. Whereas, at least at a high level, I could imagine an interface where I only pass it once (by moving) to the function async_read_some, and after the function has made the read it passes (via a move) the socket argument to the handler. Is there something fundamental that makes this impossible?

Regards,
&rzej;
<snip>
On Wed, Sep 22, 2021 at 7:34 AM Andrzej Krzemienski via Boost wrote:
I now realize that I asked the wrong question. The question that I really wanted to ask is: do I have to heap-allocate "IO" objects in order to use them with continuation-passing style completion handlers?
Are you really asking "can I move I/O objects while they are performing asynchronous operations?" Thanks
On Wed, 22 Sep 2021 at 16:38, Vinnie Falco wrote:
On Wed, Sep 22, 2021 at 7:34 AM Andrzej Krzemienski via Boost wrote:
I now realize that I asked the wrong question. The question that I really wanted to ask is: do I have to heap-allocate "IO" objects in order to use them with continuation-passing style completion handlers?
Are you really asking "can I move I/O objects while they are performing asynchronous operations?"
I meant to move a tcp::socket. I am actually not sure if this fits into the definition of "I/O object", so I may have used the term incorrectly. So, to rephrase, I assume that I can move construct a `tcp::socket`, and now I ask if there is an interface or a technique within ASIO that would allow me to move the `tcp::socket` and the buffer rather than heap-allocate them? Maybe I am talking nonsense, and I just do not understand the concurrency model here. But at first glance, I see nothing that would make this impossible. Regards, &rzej;
Andrzej Krzemienski wrote:
So, to rephrase, I assume that I can move construct a `tcp::socket`, and now I ask if there is an interface or a technique within ASIO that would allow me to move the `tcp::socket` and the buffer rather than heap-allocate them?
How would you move a buffer without heap-allocating it though :-)
On Wed, 22 Sep 2021 at 17:01, Peter Dimov via Boost wrote:
Andrzej Krzemienski wrote:
So, to rephrase, I assume that I can move construct a `tcp::socket`, and now I ask if there is an interface or a technique within ASIO that would allow me to move the `tcp::socket` and the buffer rather than heap-allocate them?
How would you move a buffer without heap-allocating it though :-)
If I use std::vector<> as a buffer, then it allocates in its implementation, but I do not have to do the second allocation to allocate the std::vector<> itself. I think I see your point, though. This would be only an exercise to prove the point, but with no practical gain. Thanks, &rzej;
Andrzej Krzemienski wrote:
My observation about the example code:
<snip>
is that I have to pass my socket in two places in one function invocation. Whereas, at least at a high level, I could imagine an interface where I only pass it once (by moving) to the function async_read_some, and after the function has made the read it passes (via a move) the socket argument to the handler.
It's not that easy because you don't need just the socket, the entire session state would have to be moved into the handler, which at minimum includes the buffer. Could still be possible in principle? I don't know. It's not clear if it will be a win, because if you put the e.g. 4K buffer into the session state, it will be costly to move, so heap-allocating it may be better (as then moving the unique_ptr is cheap).
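To make the trade-off concrete, an illustrative sketch (not code from the thread):

    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    struct fat_state       // buffer inline: no extra allocation, but every move
    {                      // of the state copies the whole 4 KiB array
      tcp::socket socket;
      char buffer[4096];
    };

    struct thin_state      // buffer behind a unique_ptr: one up-front allocation,
    {                      // but moving the state is just a few pointer swaps
      tcp::socket socket;
      std::unique_ptr<char[]> buffer = std::make_unique<char[]>(4096);
    };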
Hi everyone,

Back in September Dmitry offered the following alternative to the asynchronous TCP echo server example (https://www.boost.org/doc/libs/1_77_0/doc/html/boost_asio/example/cpp11/echo...) that uses a unique_ptr instead of a shared_ptr for managing the lifetime of the session (the socket and the buffer):
On Tue, 21 Sep 2021 at 17:22, Дмитрий Архипов via Boost wrote:
I haven't tested it, but this should probably work:
    class session {
    public:
        session(tcp::socket socket)
            : session(std::make_unique<session_impl>(std::move(socket))) {}

        void start() { do_read(); }

    private:
        static constexpr std::size_t max_length = 1024;

        struct session_impl {
            session_impl(tcp::socket socket) : socket_(std::move(socket)) {}
            tcp::socket socket_;
            char data_[max_length];
        };

        explicit session(std::unique_ptr<session_impl> impl)
            : impl_(std::move(impl)) {}

        void do_read() {
            auto& impl = *impl_;  // stable reference, taken before impl_ is moved
            impl.socket_.async_read_some(
                boost::asio::buffer(impl.data_, max_length),
                [impl = std::move(impl_)](boost::system::error_code ec,
                                          std::size_t length) mutable {
                    if (ec) { return; }
                    session(std::move(impl)).do_write(length);
                });
        }

        void do_write(std::size_t length) { /* ... */ }

        std::unique_ptr<session_impl> impl_;
    };

    ...
    session(std::move(socket)).start();
The idea here is that the last completion handler to be installed is responsible for calling the destructor. But this can only work under the assumption that the ASIO framework guarantees that the completion handler is destroyed after all the preceding operations have finished. Otherwise we can envision the situation where the destructor in the handler is called but the previous operation is still trying to access the socket.

Under normal circumstances, it should be quite obvious that the previous operation is finished before the handler is invoked and destroyed. But it is not that obvious when we add operation cancellation into the picture. I could imagine an implementation where an operation receives a cancellation request, it already knows that it will not be calling the completion handler, so it destroys the handler, and then does its own cleanup.

So, my question is: does ASIO give a guarantee that it destroys the completion handler after the preceding operations have finished?

Regards,
&rzej;
<snip>
So, my question is: does ASIO give a guarantee that it destroys the completion handler after the preceding operations have finished?
Asio will perform the following operations when about to invoke a completion handler:

- extract the handler from its surrounding async operation via std::move (assuming C++11 or better, otherwise it copies it).
- destroy/deallocate any dynamic state in the async operation.
- invoke the handler.
- destroy the handler.

This assumes that the completion handler is invoked prior to the destruction of the execution context with which it is associated (the normal case). In the corner case of the execution context being destroyed, all outstanding async operations (which would include pending completion handlers) are destroyed.
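An illustrative toy of that ordering, with stand-in types rather than Asio's actual internals, showing why anything owned by the handler outlives the operation's own dynamic state:

    #include <cstddef>
    #include <iostream>
    #include <memory>
    #include <utility>

    struct error_code { int value = 0; };

    template <class Handler>
    struct read_op
    {
      explicit read_op(Handler h) : handler(std::move(h)) {}
      Handler handler;
      std::unique_ptr<char[]> scratch{new char[256]};  // stand-in for per-op state
    };

    template <class Handler>
    void complete(std::unique_ptr<read_op<Handler>> op, error_code ec, std::size_t n)
    {
      Handler h(std::move(op->handler));  // 1. extract the handler via move
      op.reset();                         // 2. destroy/deallocate the op's dynamic state
      h(ec, n);                           // 3. invoke the handler
    }                                     // 4. handler destroyed on scope exit

    int main()
    {
      auto handler = [](error_code ec, std::size_t n)
      {
        std::cout << "handler invoked: ec=" << ec.value << " n=" << n << "\n";
      };
      auto op = std::make_unique<read_op<decltype(handler)>>(handler);
      complete(std::move(op), error_code{}, 42);
    }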
On Wed, 29 Dec 2021 at 15:26, Richard Hodges via Boost wrote:
<snip>
Thank you for the quick reply.
So, my question is: does ASIO give a guarantee that it destroys the completion handler after the preceding operations have finished?

Asio will perform the following operations when about to invoke a completion handler:

- extract the handler from its surrounding async operation via std::move (assuming C++11 or better, otherwise it copies it).
- destroy/deallocate any dynamic state in the async operation.
- invoke the handler.
- destroy the handler.
This assumes that the completion handler is invoked prior to the destruction of the execution context with which it is associated (the normal case).
Yes.
In the corner case of the execution context being destroyed, all outstanding async operations (which would include pending completion handlers) are destroyed.
So, I am not sure how many corner cases there are. One would be to destroy the execution context (like io_context). I understand that requesting a cancellation of an asynchronous operation is also a kind of special case: we do not expect the completion handler to be invoked. (Am I right?)

I imagine that in order to provide the cancellation guarantee as expressed in https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/canc... the framework still needs to execute some instructions. These final cleanup instructions might still require resources that the user provided (like the socket). The implementation might first execute these cleanup instructions and then destroy the completion handler, or do the opposite: first destroy the completion handler and then call the cleanup instructions. The resource handling strategy with the unique_ptr in the completion handler will work correctly in the first case, but not in the second.

So, I would need an additional guarantee that says that the asynchronous operation performs no logic (other than the call to the destructor) after destroying the completion handler -- even in the event of cancellation. Or maybe the resource-handling strategy in my example is not supported?

Regards,
&rzej;
A part of our discussion accidentally went private, so I am reposting it here:
On Wed, 29 Dec 2021 at 17:48, Richard Hodges wrote:
On Wed, 29 Dec 2021 at 16:04, Andrzej Krzemienski wrote:
On Wed, 29 Dec 2021 at 15:26, Richard Hodges via Boost wrote:
<snip>
Thank you for the quick reply.
So, my question is: does ASIO give a guarantee that it destroys the completion handler after the preceding operations have finished?

Asio will perform the following operations when about to invoke a completion handler:

- extract the handler from its surrounding async operation via std::move (assuming C++11 or better, otherwise it copies it).
- destroy/deallocate any dynamic state in the async operation.
- invoke the handler.
- destroy the handler.
This assumes that the completion handler is invoked prior to the destruction of the execution context with which it is associated (the normal case).
Yes.
In the corner case of the execution context being destroyed, all outstanding async operations (which would include pending completion handlers) are destroyed.
So, I am not sure how many corner cases there are. One would be to destroy the execution context (like io_context).
This is the only one. If you stop() the io_context but do not destroy it, the outstanding async operations and completion handlers stay alive until they are allowed to complete (after calling resume()).
I understand that requesting a cancellation of an asynchronous operation is also a kind of special case: we do not expect the completion handler to be invoked. (Am I right?)
No, we expect the completion handler to be invoked either:
a) with the error code asio::error::operation_aborted, in the case that our cancellation signal was noticed prior to the operation's completion, or
b) with the result of the operation (whether success or failure) in the case where our cancellation happened after the completion handler of the current operation(s) were posted to the execution context for invocation.
<snip>
Think of cancellation as a request to cancel. Whether the cancellation signal landed in time, or whether the operation agreed to cancel itself can be discovered when the completion handler is invoked, which is _always_ expected unless you have stopped the execution context.
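So with the unique_ptr-in-the-handler design, cancellation is not a separate cleanup path: the handler still runs (typically with operation_aborted) and decides what happens to the state it owns. As a sketch, a do_read variant of Dmitry's earlier session might look like this (illustrative, not tested):

    void do_read()
    {
      auto& im = *impl_;  // stable reference taken before ownership is moved
      im.socket_.async_read_some(
          boost::asio::buffer(im.data_, max_length),
          [impl = std::move(impl_)](boost::system::error_code ec, std::size_t n) mutable
          {
            if (ec == boost::asio::error::operation_aborted)
              return;  // cancelled: impl (socket and buffer) is destroyed right here
            if (ec)
              return;  // any other error: likewise
            session(std::move(impl)).do_write(n);  // success: pass ownership onward
          });
    }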
Thank you for the clarification. Regards, &rzej;
participants (8)

- Andrzej Krzemienski
- Niall Douglas
- Oliver Kowalke
- Peter Dimov
- Richard Hodges
- Seth
- Vinnie Falco
- Дмитрий Архипов