On 2/02/2015 02:52, Aaron Levy wrote:
io_service svc;
io_service::work work(svc);

thread t1([&svc]() { svc.run(); });
thread t2([&svc]() { svc.run(); });
thread t3([&svc]() { svc.run(); });

endpoint ep(ip::tcp::v4(), port);
acceptor acceptor(svc, ep);

while (true) {
    shared_ptr<socket> sock(new socket(svc));
    acceptor.accept(*sock);
    svc.post([sock]() { /* do stuff on sock here */ });
}
Is this way of using io_service for accepting tcp connections, and also as a thread pool for serving connected clients, valid? Or could I hit some undefined behavior?
A little of both. In general you can post whatever jobs you like to an io_service (including things that aren't I/O -- it's a great generic thread pool), but when multiple threads are running the service, any one of those threads can end up running the job / handling the callback.

Most of the I/O objects (eg. sockets), and indeed most other objects, are not intended to have a single instance used concurrently from multiple threads. You can prevent that either by ensuring that only one operation is "in flight" on a given object at a time (an implicit strand), or by explicitly synchronising operations on the same object via a strand object, or by using some other mechanism (eg. locks), although the last is the least preferred.

In the code above, you should be fine with regard to acceptor vs. sock, since you're only touching one at a time. But you'll need to be careful if you do multiple operations on sock.

Also, I could be wrong about this, but I think if you eg. perform a blocking read inside your sock job, it will tie up a whole thread for the duration -- which means you'll quickly run out of threads if you get several connections at once. Using async code should avoid this.
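For the explicit-strand case, here's a minimal sketch using the io_service-era Boost.Asio API from the quoted code. start_reading and the 1024-byte buffer are just names/sizes I've picked for illustration, not anything from the original post:

#include <array>
#include <memory>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

void start_reading(std::shared_ptr<tcp::socket> sock,
                   std::shared_ptr<boost::asio::io_service::strand> strand,
                   std::shared_ptr<std::array<char, 1024>> buf)
{
    // strand->wrap() ensures no two handlers for this socket run
    // concurrently, even with several threads calling io_service::run().
    sock->async_read_some(
        boost::asio::buffer(*buf),
        strand->wrap(
            [sock, strand, buf](const boost::system::error_code& ec,
                                std::size_t n)
            {
                if (ec) return;                      // closed or error
                // ... do stuff with the n bytes in *buf ...
                start_reading(sock, strand, buf);    // queue the next read
            }));
}

You'd create the strand (and buffer) alongside the socket in the accept loop and call start_reading instead of posting a blocking job; capturing the shared_ptrs in the handler keeps everything alive until the connection is done.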
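And on the last point, you can make the accept loop itself asynchronous so that no pool thread sits parked in accept(). A sketch, assuming the same acceptor and io_service as in the quoted code (do_accept is a name I've made up):

void do_accept(tcp::acceptor& acceptor, boost::asio::io_service& svc)
{
    auto sock = std::make_shared<tcp::socket>(svc);
    acceptor.async_accept(*sock,
        [&acceptor, &svc, sock](const boost::system::error_code& ec)
        {
            if (!ec) {
                // hand the connection over to async reads/writes
                // (eg. the start_reading sketch above) rather than
                // doing blocking I/O on a pool thread
            }
            do_accept(acceptor, svc);   // wait for the next connection
        });
}

With that, all three threads are free to run completion handlers, and a slow client only costs you an outstanding async operation rather than a whole thread.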