Should pass boost::asio::io_service by raw pointer or smart pointer?
Hi,

In many examples and even implementations, programmers like to pass boost::asio::io_service by raw pointer:

    MyClass::MyClass(boost::asio::io_service &io_service)
        : myRawIoService(io_service) {}

Will passing a raw pointer cause issues for large classes? Should it be replaced by passing a smart pointer:

    MyClass::MyClass(const std::shared_ptr<boost::asio::io_service> &io_service)
        : myIoService(io_service) {}

Thank you.
On Wed, Dec 19, 2018 at 4:30 PM hh h via Boost wrote:
In many examples and even implementations, programmers like to pass boost::asio::io_service by raw pointer:
Those examples are wrong. `io_context` should be passed by reference.
Will this cause issues to pass raw pointer for large size of classes, should it be replaced by passing smart pointer:
A smart pointer is less efficient than a raw pointer, all else being equal. But you shouldn't be using any form of pointer; the io_context (legacy name: io_service) is passed by reference. All of the Beast and Asio examples demonstrate this:

https://github.com/boostorg/beast/tree/develop/example
https://github.com/boostorg/asio/tree/develop/example

Regards
Adding some lessons learned over the years to Vinnie's answer, because it's
not immediately obvious from the ASIO documentation...
You should normally think of an io_context as a global dependency of the
application. Create it in (or near) main() and pass a reference to it to
every component of your application that needs access to an io_context.
There are a couple of models for using ASIO:
* one io_service, one thread - initialise your io_service with an argument
of (1) to hint to ASIO that it can optimise for single-threaded code
* one io_service, N threads - default-initialise your io_service and use
strands to prevent contention in handlers.
* N threads with one io_service per thread - a very efficient model, which
will require you to manually load-balance connections by allocating them to
the io_context associated with the least current load. This should probably
not be attempted on your first try with ASIO.
There are some other things that are not always obvious:
* Many people keep shared_ptr's to sockets, deadline_timers and so on. This
is an error. All ASIO io objects are moveable. Store by value.
* SSL contexts are similar to io_contexts. There should be one (or perhaps
two if you're both accepting inbound connections and making outbound
connections) per application. Pass them by reference to your client/server
objects. Storing an ssl::context in an object makes it non-moveable and is
an error.
* Good ASIO development is aided by the principle of dependency injection.
* ASIO is easy to test - look at the methods io_context::poll(),
io_context::run_one(), io_context::stopped(), io_context::restart() and
io_context::strand::running_in_this_thread(). You can essentially
single-step ASIO in your unit tests. This is a strong reason for using
dependency injection (i.e. pass configuration data, contexts and executors
by reference).
* If you're multi-threading you'll either want to protect your objects with
mutexes (not so efficient) or strands (really efficient). However, don't
mix the models. If you're using strands then all access to your object from
outside should be via an async function which takes a closure and posts an
implementation of itself to the current object's strand. This ensures that
everything runs in the correct thread (which you can assert with
strand::running_in_this_thread() in your handler functions).
-- Richard Hodges hodges.r@gmail.com office: +442032898513 home: +376841522 mobile: +376380212 (this will be *expensive* outside Andorra!) skype: madmongo facebook: hodges.r
Thanks Richard and Vinnie, that was indeed comprehensive.
* Many people keep shared_ptr's to sockets, deadline_timers and so on. This is an error. All ASIO io objects are moveable. Store by value.
That is the crux of the clarification to my post: so the io_service, contexts, sockets and deadline_timers should all be stored by value, not via shared pointer.
* SSL contexts are similar to io_contexts. There should be one (or perhaps two if you're both accepting inbound connections and making outbound connections) per application. Pass them by reference to your client/server objects. Storing an ssl::context in an object makes it non-moveable and is an error.
In my server, there are mixed connections per application: SSL is used for
application-provided services and connections over the Internet, and
non-SSL connections are used for application-provided internal services
within the cloud (I believe this should not raise security issues). I am
going to use one global io_service and one global SSL context, shared by
both the SSL and the non-SSL applications (the latter simply never touch
the SSL context). I think it should be fine, but I would appreciate your
opinion.
Thank you very much; I appreciate it.
- JHH
On Wed, Dec 19, 2018 at 10:08 PM Richard Hodges via Boost wrote:
Adding some lessons learned over the years to Vinnie's answer, because it's not immediately obvious from the ASIO documentation...
Wow... yeah. I have been saying that the "problem" with Asio / Networking TS is the lack of good documentation and tutorials. And this is not meant as a slight against the Asio author - it is a pretty hefty library with big implications, and it would be unfair to presume that the burden of all present and future learning materials should fall on him.

There are folks who think that networking is "too complicated", but again I feel this would be addressed by having good literature. Your post was great; it touched on a lot of good guidelines. We need this sort of thing in a readily accessible reference format, and then I think networking will be seen in an even more favorable light.
If you're using strands then all access to your object from outside should be via an async function which takes a closure and posts an implementation of itself to the current object's strand. This ensures that everything runs in the correct thread (which you can assert with strand::running_in_this_thread() in your handler functions).
For member functions of application-level classes owning I/O objects, where those member functions can be called from foreign threads, I use this idiom:

    void session::do_thing(Arg arg)
    {
        if(! strand_.running_in_this_thread())
            return boost::asio::post(
                strand_,
                std::bind(&session::do_thing, shared_from_this(), arg));
        // at this point, the thread of execution is on the strand
        async_do_something(socket_, arg, ...);
    }

This way, do_thing can be called from anywhere.

Regards
On Thu, Dec 20, 2018, 09:08 Richard Hodges via Boost wrote:
* Many people keep shared_ptr's to sockets, deadline_timers and so on. This is an error. All ASIO io objects are moveable. Store by value.
Hmmm... just double checking that my knowledge is up to date: there's no
way to create a timer, async-wait on it, and pass its ownership to the
callback. In other words, code like the following cannot be improved in
C++14:
template
On Tue, Dec 25, 2018 at 4:54 AM Antony Polukhin via Boost wrote:
Hmmm... just double checking that my knowledge is up to date: there's no way to create a timer, async wait for it and pass the ownership to the callback. In other words, code like the following can not be improved in C++14:
The version of C++ has nothing to do with what is possible or how the asynchronous ownership model works. I/O objects need a stable address. After an asynchronous operation completes, it is likely that the operation will be repeated in the future. For example, you will likely read from a socket again. Therefore, your "operation" for delaying the execution of a single function object might look like this:

    class delayed_runner
    {
        net::steady_timer tm_;

    public:
        using clock_type = std::chrono::steady_clock;

        ...

        template <class NullaryFunction>
        NET_INITFN_RESULT_TYPE(NullaryFunction, void(void))
        async_run_after(clock_type::duration expiry_time, NullaryFunction&&);

        template <class NullaryFunction>
        NET_INITFN_RESULT_TYPE(NullaryFunction, void(void))
        async_run_at(clock_type::time_point expiry_time, NullaryFunction&&);

        void cancel();
    };

The caller is responsible for ensuring this object is not destroyed while there is an outstanding operation. Similar to how a socket is treated, a `delayed_runner` would be a data member of some "session" object which is itself managed by `shared_ptr`. Completion handlers use a "handler owns I/O object" shared ownership model, so the function object contains a bound copy of the shared pointer, ensuring that the lifetime of the I/O object extends until the function object is invoked or destroyed.

Regards
On Wed, Dec 26, 2018, 06:13 Vinnie Falco via Boost wrote:

Completion handlers use a "handler owns I/O object" shared ownership model, so the function object contains a bound copy of the shared pointer, ensuring that the lifetime of the I/O object extends until the function object is invoked or destroyed.

Yes, thank you, I knew that.
The statement was that the ASIO classes should be stored by value; I was
wondering about the possibility of that in some cases.

The version of the C++ Standard is relevant here. C++17 added guaranteed
copy elision and brought more order to evaluation order :) Does it make it
possible to create a callback with a stable address, holding an
async-waiting timer inside it, without a call to operator new?
On Tue, Dec 25, 2018 at 11:03 PM Antony Polukhin wrote:
The version of the C++ Standard is relevant here. C++17 added guaranteed copy elisions and brought more order into evaluation order :) Does it allow to make a callback with stable address and an async waiting timer in it without a call to operator new?
C++ is not the issue here; it is the way that memory is allocated for continuations. The implementation necessarily allocates memory to store the I/O operation (since the initiating function returns to the caller while the operation is outstanding). When the completion handler is invoked, it is very likely that another asynchronous operation will be performed - and thus, another memory allocation.

This predictable pattern of allocation permits an optimization: the completion handler is moved to the stack before being invoked, and the memory used to store the handler can be re-used for the next asynchronous operation performed in the same context. This is why completion handlers are MoveConstructible. The downside is that none of the data members of the handler are stable (including the `this` pointer), but that is more than made up for by the improved performance.

You could, in theory, make your callback a shared_ptr or unique_ptr to some stable data, but you said you didn't want a unique_ptr.

Regards
On 26/12/2018 16:12, Vinnie Falco wrote:
The caller is responsible for ensuring this object is not destroyed while there is an outstanding operation. Similar to how a socket is treated, a `delayed_runner` would be a data member of some "session" object which is itself managed by `shared_ptr`. Completion handlers use a "handler owns I/O object" shared ownership model, so the function object contains a bound copy of the shared pointer, ensuring that the lifetime of the I/O object extends until the function object is invoked or destroyed.
I think people are mixing up the object layers, or following some older (erroneous) advice of "if in doubt, use shared_ptr everywhere".

As I understand it, the general recommendation when using ASIO is to implement a higher-level object (eg. "session") which contains the underlying ASIO object (eg. "socket") by value, along with other related state such as strands, streambufs, diagnostic info, etc -- all stored by value.

The higher-level object ("session") itself, however, should be given shared ownership (and kept in a shared_ptr) -- and is thus generally non-copyable and non-moveable. And the shared_ptr (itself, not just "this") must be copied into handlers, either via a lambda capture or an explicit parameter. This usually requires using shared_from_this(), although some other designs are possible.

This is required in order to ensure that the session object exists as long as outstanding handlers exist, to avoid invoking callbacks with a deleted "this" and causing UB. Keeping the session object alive will inherently keep the socket, buffers, and other member state alive, so they don't need to themselves be shared_ptrs.

Having the session object not be in a shared_ptr is theoretically possible but not really practical, as you'd have to be able to guarantee that its destructor is either not called until, or does not return until, all outstanding handlers involving that object have been called or discarded, and this isn't really feasible. (Closing or destroying a socket will cancel any pending operations but does not guarantee that the operations' completion handlers are called synchronously. There is a way to block until the handlers have been called, but this can deadlock if called in the wrong context.)
participants (5):
- Antony Polukhin
- Gavin Lambert
- hh h
- Richard Hodges
- Vinnie Falco