We found that a dedicated thread per io_service made things a lot easier. Couple that with your favourite flavour of concurrent queue to keep the async I/O callback code short (a rough sketch of that setup appears at the bottom of this message). Strands seem perfect if you want to kill any scalability, so we never even considered patterns that required them.

On Tuesday, June 19, 2018, Gavin Lambert via Boost-users <boost-users@lists.boost.org> wrote:
On 19/06/2018 07:34, james wrote:
No, not per socket. One per port that accepts connections if you're a server, and one per host you're connecting to if you're a client.
While you *can* do that, it's unusual and doesn't really provide any particular benefit over the models that Vinnie specified.
Essentially, if you make one thread per endpoint then you have the same thing as the "one io_context per thread" model that Vinnie spoke of.
If you have multiple independent thread pools, one per endpoint, then you have a combination of the "one io_context per thread" and "one io_context, multiple threads" models. That works, but it's still unusual, as you start to run into more drawbacks than benefits with this pattern, so I wouldn't recommend it.
Otherwise, you have multiple io_contexts per thread, which is a Bad Idea™.
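For readers who haven't met the two models referred to above, a minimal sketch of each follows; this is not code from the thread. io_context is the name that replaced io_service in Boost 1.66, the thread counts are arbitrary, and the function names are made up for illustration.

#include <boost/asio.hpp>
#include <memory>
#include <thread>
#include <vector>

// "One io_context, multiple threads": a single shared context, several threads
// calling run(). Handlers may execute concurrently, so any shared state needs
// synchronisation (this is where strands or explicit locking come in).
void one_context_many_threads()
{
    boost::asio::io_context io;
    auto guard = boost::asio::make_work_guard(io); // keep run() alive until work arrives
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });

    // ... open acceptors/sockets against `io` and start async operations ...

    guard.reset();
    for (auto& t : pool)
        t.join();
}

// "One io_context per thread": each thread owns its own context, so handlers
// on a given context never run concurrently and need no locking; sockets are
// distributed across the contexts (e.g. round-robin).
void one_context_per_thread()
{
    const int n = 4;
    std::vector<std::unique_ptr<boost::asio::io_context>> contexts;
    std::vector<std::thread> pool;
    for (int i = 0; i < n; ++i) {
        contexts.push_back(std::make_unique<boost::asio::io_context>());
        auto* io = contexts.back().get();
        // Without pending work or a work guard, run() returns immediately;
        // real code would add a guard or start async operations first.
        pool.emplace_back([io] { io->run(); });
    }

    // ... assign each new connection to one of the contexts ...

    for (auto& t : pool)
        t.join();
}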
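For completeness, here is a minimal sketch of the dedicated-thread-plus-queue setup described in the first paragraph above: one thread runs the io_context, and completion handlers do nothing but push work onto a thread-safe queue that a worker thread drains. The WorkQueue class is just a stand-in for whatever concurrent queue you prefer, and none of these names come from the poster's actual code.

#include <boost/asio.hpp>
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Stand-in for "your favourite flavour of concurrent queue".
class WorkQueue
{
public:
    void push(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push_back(std::move(job));
        }
        cv_.notify_one();
    }

    std::function<void()> pop()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !jobs_.empty(); });
        auto job = std::move(jobs_.front());
        jobs_.pop_front();
        return job;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> jobs_;
};

int main()
{
    boost::asio::io_context io;
    auto guard = boost::asio::make_work_guard(io);
    WorkQueue queue;

    // The dedicated I/O thread: nothing heavy ever runs here.
    std::thread io_thread([&io] { io.run(); });

    // A worker thread drains the queue; it handles a single job here for brevity.
    std::thread worker([&queue] { queue.pop()(); });

    // A completion handler stays short: it only hands the work off.
    boost::asio::post(io, [&queue] {
        queue.push([] { /* parse / process the received data here */ });
    });

    guard.reset();   // let run() return once the posted handler is done
    io_thread.join();
    worker.join();
}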