boost::asio blocking socket read with timeout with multiple threads
I've got a boost::asio based server, and generally it's working well. The server runs multiple threads, each simply running the io_service::run method on the same io_service object. It accepts multiple connections and processes multiple concurrent sockets, using a strand per connection to ensure that the data for each connection is processed strictly sequentially. For various reasons it uses a mixture of async and sync reads.

The trouble I'm having is implementing a timeout on the synchronous (blocking) reads. I want to protect the server in the case of a client sending an erroneous or malicious packet, for example, and make sure a read doesn't block forever.

There was a post back in 2007 that gave a technique (https://lists.boost.org/Archives/boost/2007/04/120339.php), and there's a Stack Overflow question that references it (https://stackoverflow.com/questions/13126776/asioread-with-timeout). There are also some examples in the docs that use a similar kind of technique. In the examples, the server is all asynchronous; the client is synchronous, but only uses one thread. The example from 2007 also only uses one thread. The examples all revolve around the technique of starting an *async* read, then performing a nested loop of io_service::run_one. However, so far I've been unable to find a form of code that works reliably in a multithreaded environment.

The basis for my experimentation is here: https://gist.github.com/anonymous/1160c11f8ed9c29b9184325191a3a63b

It starts a server thread, then starts a client that makes a connection and then writes nothing, to simulate a "bad" client and to provoke a "timeout" condition. What I'm finding is this: with one server thread I can make it work OK -- the nested loop of run_one() operates correctly. With multiple server threads, though, I can't make it work reliably. The basis of the loop is:

    bool timedOut = false;
    bool readComplete = false;
    timer.async_wait(strand_.wrap(boost::bind(&handleReadTimeout, &timedOut, &socket_, _1)));
    async_read(socket_, buffer(b), strand_.wrap(boost::bind(handleReadComplete, &ec, &bytesRead, &readComplete, &timer, _1, _2)));
    while (socket_.get_io_service().run_one()) {
        if (timedOut && readComplete) {
            break;
        }
    }

handleReadTimeout sets the timedOut flag to true and calls "cancel" on the socket. The handleReadComplete callback sets readComplete to true and calls cancel on the timer.

With multiple threads, the handleReadTimeout/handleReadComplete callbacks are run on other threads, so the while loop here just blocks, as there is never anything for it to run. That's my surmise of what's going on, anyway. I've experimented with strands to try and force it all onto the same thread, but so far failed (if the above code is called in the context of the same strand, it just seems to block the handleReadTimeout and handleReadComplete callbacks from being called). Note that if I expand my example to include other clients talking to a simple "echo" service to simulate other traffic, then the while loop above *does* wake, as that other traffic gives it something to do. But that's fragile.

The alternative formulation, the one I started with actually, is to do an async wait on the timer and a normal synchronous read on the socket. The timer callback performs a cancel (or close -- I tried both) on the socket, hoping to cause the socket read to error. This is the kind of thing you'd do if you were programming raw sockets. That works fine on Windows, but won't work on Linux.
The documentation for cancel does say it cancels any outstanding *async* calls, but I'm surprised calling close doesn't work and cause the read to wake. So I'm stuck. I was hoping someone here could point me in the right direction, as it seems an obvious thing (to me) to want to try and do. Is there a form of code that allows me to perform a synchronous read from a socket and time it out, which will work in a multi-threaded, single-io_service environment? This is all with boost 1.64, by the way. Thanks for any help in advance.
On Thu, Mar 15, 2018 at 07:04:19PM +0000, Thomas Quarendon via Boost-users wrote:
The examples all revolve around the technique of starting an *async* read, then performing a nested loop of io_service::run_one. However, so far I've been unable to find a form of code that works reliably in a multithreaded environment.
I played around with this, and I don't really see how this can work reliably when called from _within_ the io_service. I don't believe the io_service was intended to be used in this re-entrant manner.
The basis for my experimentation is here: https://gist.github.com/anonymous/1160c11f8ed9c29b9184325191a3a63b It starts a server thread, then starts a client that makes a connection and then writes nothing, to simulate a "bad" client and to provoke a "timeout" condition.
[snip]
With multiple threads, the handleReadTimeout/handleReadComplete callbacks are run on other threads. So the while loop here just blocks, as there is never anything to run. That's my surmise of what's going on anyway. I've experimented with strands to try and force it all onto the same thread, but so far failed (if the above code is called in the context of the same strand, it just seems to block the handleReadTimeout and handleReadComplete callbacks from being called).
Strands don't force it to the same thread, they just force the handlers to not be run concurrently. Anyway, I found I can make your example work if you add a separate io_service to execute the handlers for the blocking connection. I believe all the example solutions that you linked to also made the assumption that you wouldn't try to call the blocking reads from within the io_service as well.

--- boost_asio_read_timeout  2018-03-16 08:15:18.877050171 +0100
+++ asio2.cpp                2018-03-16 08:41:02.677071294 +0100
@@ -31,12 +31,13 @@ public:
 };

 class BlockingConnection : public boost::enable_shared_from_this<BlockingConnection> {
+    boost::asio::io_service svc_;
     boost::asio::strand strand_;
     tcp::socket socket_;
 public:
     BlockingConnection(io_service& ioservice)
-        : strand_(ioservice), socket_(ioservice)
+        : strand_(svc_), socket_(svc_)
     {}

     tcp::socket& socket() {
@@ -62,7 +63,8 @@ public:
         async_read(socket_, buffer(b), strand_.wrap(boost::bind(handleReadComplete, &ec, &bytesRead, &readComplete, &timer, _1, _2)));

-        while (socket_.get_io_service().run_one()) {
+        boost::asio::io_service::work work(svc_);
+        while (svc_.run_one()) {
             if (timedOut && readComplete) {
                 break;
             }

One thing to note: I thought I could get away with keeping BlockingConnection.socket_ in the initial io_service, but found that this will deadlock if all the threads of the initial io_service happen to be executing this code at the same time. In that case there may be no thread to service the actual 'async_read/timer' handlers (that in turn call strand_.dispatch). Moving the BlockingConnection.socket_ to BlockingConnection.svc_ fixes that.
The alternative formulation, the one I started with actually, is to do an async wait on the timer, and a normal synchronous read on the socket. The timer callback performs a cancel (or close -- I tried both) on the socket hoping to cause the socket read to error. This is the kind of thing you'd do if you were programming raw sockets. That works fine on Windows, but won't work on Linux. The documentation for cancel does say it cancels any outstanding *async* calls, but I'm surprised calling close doesn't work and cause the read to wake.
The documentation also states that socket objects are _not_ thread safe. That's the real reason this doesn't work.
Something like this should do it (I have used streambuf mechanics, but any would suffice). You need to ensure that some other thread is servicing the io_context.

#include <boost/asio.hpp>
#include <condition_variable>
#include <istream>
#include <mutex>
#include <string>

std::string timed_read_line(boost::asio::streambuf& buffer, boost::asio::ip::tcp::socket& sock)
{
    namespace asio = boost::asio;
    using boost::system::error_code;

    std::condition_variable cv;
    std::mutex mut;
    error_code op_error = error_code();
    bool done = false;

    auto get_lock = [&] { return std::unique_lock<std::mutex>(mut); };

    auto read_handler = [&](error_code ec, std::size_t sz)
    {
        auto lock = get_lock();
        if (not done) {
            done = true;
            op_error = ec;
        }
        lock.unlock();
        cv.notify_one();
    };

    auto timer_handler = [&](error_code ec)
    {
        auto lock = get_lock();
        if (not done) {
            done = true;
            op_error = asio::error::timed_out;
        }
        lock.unlock();
        cv.notify_one();  // wake the waiter below on timeout as well
    };

    asio::async_read_until(sock, buffer, '\n', read_handler);

    asio::deadline_timer timer(sock.get_io_context(), boost::posix_time::seconds(3));
    timer.async_wait(timer_handler);

    auto lock = get_lock();
    cv.wait(lock, [&] { return done; });

    if (op_error) {
        throw boost::system::system_error(op_error);
    }
    else {
        std::istream is(std::addressof(buffer));
        std::string result;
        std::getline(is, result);
        return result;
    }
}

On 16 March 2018 at 07:46, Jeremi Piotrowski via Boost-users <boost-users@lists.boost.org> wrote:
[snip]
Hmm, OK, thanks.

Say I've got 10 server threads, and 10 clients all do the same: make a connection, then don't send anything. All ten threads are going to be in the same situation, all requiring the async requests to be serviced on another thread, aren't they, which then won't be possible.

And it all seems very heavy for what is essentially setsockopt(SO_RCVTIMEO), even if I don't actually create the mutex and condition variable each time. Naively I did try setting SO_RCVTIMEO on the socket and calling read_some, but it didn't appear to work. I'll have to try stepping through the code to work out why not, as that seems like what I really want to do.

Thanks.
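(For reference: asio exposes no typed socket option for receive timeouts, so "setting SO_RCVTIMEO" means going through setsockopt on the native handle, roughly as in this minimal sketch. It assumes POSIX -- on Windows the option value is a DWORD of milliseconds -- and the helper name is invented for illustration. As established later in the thread, asio's synchronous read path on Linux doesn't honour the option as hoped.)

    #include <boost/asio.hpp>
    #include <sys/socket.h>
    #include <sys/time.h>

    void set_receive_timeout(boost::asio::ip::tcp::socket& sock, int seconds)
    {
        timeval tv = {};
        tv.tv_sec = seconds;

        // Go behind asio's back; nothing in the library wraps this option.
        ::setsockopt(sock.native_handle(), SOL_SOCKET, SO_RCVTIMEO,
                     &tv, sizeof(tv));
    }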
On 16 March 2018 at 12:57 Richard Hodges via Boost-users wrote:
[snip]
Say I've got 10 server threads, and 10 clients all do the same, make a connection then don't send anything. All ten threads are going to be in the same situation, all requiring the async requests to be serviced on another thread aren't they, which then won't be possible.
That's correct. You'll want to ensure that there is some other thread servicing the async operations.
Naively I did try setting SO_RCVTIMEO on the socket and calling read_some, but it didn't appear to work.
asio does a lot of magic on the sockets to ensure proper operation in the context of its defined interface. You are not alone, this has been noted before: https://stackoverflow.com/questions/30410265/so-rcvtime-and-so-rcvtimeo-not-...

My advice, FWIW: spin up a utility thread and protect it from blocking operations, or re-engineer the process to avoid blocking the io_context threads.

On 16 March 2018 at 13:54, Thomas Quarendon via Boost-users <boost-users@lists.boost.org> wrote:
[snip]
My advice, FWIW: spin up a utility thread and protect it from blocking operations, or re-engineer the process to avoid blocking the io_context threads.
For plain sockets, I realise the simplest implementation is just to do a manual "select" call using the native socket handle. Check the result of "available" first, and if 0, just construct a call to the normal socket select function (i.e. go round the back of asio) with a timeout. It's a bit crude, and it's a shame that you have to, but it works.

What doesn't work, though, is attempting to do the same for an SSL socket stream. There's no way to tell whether there is data available on the stream that can be read without blocking. You can perform a select on the raw socket OK, but you are in danger of blocking when there's unprocessed data in the SSL layer that's been decrypted and not returned, or read from the socket but not yet decrypted. So it's a solution if you aren't interested in SSL.
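(A minimal sketch of that manual select() fallback, assuming POSIX and eliding error handling; the function name is invented for illustration.)

    #include <boost/asio.hpp>
    #include <sys/select.h>

    // Returns true if a subsequent synchronous read won't block.
    bool wait_readable(boost::asio::ip::tcp::socket& sock, int timeout_secs)
    {
        if (sock.available() > 0)
            return true;  // data already buffered by the kernel

        int fd = sock.native_handle();
        fd_set readset;
        FD_ZERO(&readset);
        FD_SET(fd, &readset);

        timeval tv = {};
        tv.tv_sec = timeout_secs;

        // > 0 means the socket became readable before the timeout expired.
        return ::select(fd + 1, &readset, nullptr, nullptr, &tv) > 0;
    }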
Attempting to understand the implementation, it feels like this could be made to work. Indeed on Windows, it DOES work. On Windows, manually setting SO_RCVTIMEO causes the socket_ops::recv call in socket_ops::sync_recv to return a timeout error (boost::asio::error::timed_out, I believe). This causes the sync_recv call to return that error, and all is well.

On Linux, the error returned by socket_ops::recv appears to be boost::asio::error::try_again, and this causes sync_recv to then call poll, waiting for the socket to become ready. Since there are almost certainly other things going on in this loop, other situations and error codes to consider, I can't say for certain, but it feels like there would be some logic here that could make a receive timeout work.
On 19 March 2018 at 10:33 tom@quarendon.net wrote:
Attempting to understand the implementation it feels like this could be made to work.
Interestingly, I notice that the basic_socket_streambuf class (in 1.67 at least), in accordance with the N4656 draft specification, DOES have support for timeouts. It implements the overflow and underflow calls using lower level asio calls, somewhat shadowing the implementation of socket_ops::sync_recv, but, crucially, not attempting an initial blocking read, and then passing a timeout to the "poll_read" call:

    // Wait for socket to become ready.
    if (detail::socket_ops::poll_read(
          socket().native_handle(), 0, timeout(), ec_) < 0)

This would seem to suggest that, fundamentally, reading with a timeout can be made to work, as it works fine here. So, for non-SSL usage, replicating the same kind of logic that basic_socket_streambuf does would seem like it would work. However, that won't work for SSL. You'd basically have to create your own socket class that did this, and then wrap that in the ssl_stream.

IMHO at least, having an equivalent of the "expiry" functionality provided by basic_socket_streambuf, but at the tcp::socket class level, would seem desirable. It seems odd that this functionality is considered useful by the N4656 draft specification at the basic_socket_streambuf level, but not at the tcp::socket level. And looking at what basic_socket_streambuf does, it wouldn't seem like it would be that complex.

Thanks.
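(For readers who haven't seen the streambuf-level timeout in action, here is a minimal sketch. It assumes Boost 1.66 or later, where the networking-TS expiry interface exists; older releases spell it expires_from_now with a posix_time duration. The function name and endpoint arguments are illustrative.)

    #include <boost/asio.hpp>
    #include <chrono>
    #include <stdexcept>
    #include <string>

    std::string read_line_with_deadline(const char* host, const char* port)
    {
        boost::asio::ip::tcp::iostream stream;
        stream.expires_after(std::chrono::seconds(3));  // applies to connect and reads
        stream.connect(host, port);

        std::string line;
        if (!std::getline(stream, line))
            throw std::runtime_error("stream error or timeout: " + stream.error().message());
        return line;
    }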
On Mon, 2018-03-19 at 14:21 +0000, Thomas Quarendon via Boost-users wrote:
[snip]
This would seem to suggest that fundamentally, reading with a timeout can be made to work, as it works fine here.
If this doesn't work out for you, here's another way - it relies on the fact that you can create an asio socket object from a socket handle, and that socket object can be owned by a different io_context. This allows us to reserve a thread for a separate utility context:

#include <boost/asio.hpp>
#include <condition_variable>
#include <istream>
#include <mutex>
#include <string>
#include <thread>
#include <unistd.h>

std::string read_line_with_timeout(boost::asio::ip::tcp::socket& sock, boost::asio::streambuf& buf)
{
    namespace asio = boost::asio;

    // these statics could of course be encapsulated into a service object
    static asio::io_context executor;
    static asio::io_context::work work(executor);
    static std::thread mythread{[&] { executor.run(); }};

    auto temp_socket = asio::generic::stream_protocol::socket(
        executor, sock.local_endpoint().protocol(), dup(sock.native_handle()));

    auto timer = asio::deadline_timer(executor, boost::posix_time::milliseconds(3000));

    std::condition_variable cv;
    std::mutex m;
    int done_count = 0;
    bool have_result = false;
    boost::system::error_code err;

    auto get_lock = [&] { return std::unique_lock<std::mutex>(m); };

    auto aborted = [](boost::system::error_code const& ec)
    {
        return ec == boost::asio::error::operation_aborted;
    };

    auto common_handler = [&](auto ec)
    {
        auto lock = get_lock();
        if (not aborted(ec) && not have_result) {
            have_result = true;
            err = ec;
            boost::system::error_code sink;
            temp_socket.cancel(sink);
            timer.cancel(sink);
        }
        // count every completion, aborted or not, so the waiter below
        // can't return while a handler still references these locals
        ++done_count;
        lock.unlock();
        cv.notify_one();
    };

    async_read_until(temp_socket, buf, '\n',
                     [&](auto ec, auto&&...) { common_handler(ec); });

    timer.async_wait([&](auto ec) { common_handler(ec ? ec : asio::error::timed_out); });

    auto lock = get_lock();
    cv.wait(lock, [&] { return done_count == 2; });

    if (err)
        throw boost::system::system_error(err);

    std::istream is(&buf);
    std::string result;
    std::getline(is, result);
    return result;
}
[snip]
OK, thanks. It'll take me a while to understand this properly, but I *think* what you've really done is reduce it to an io_service per connection, and sure, I can see how that works.

Trouble is, what I want to do is a mixture of async and sync calls. So multiple connections are made, there are multiple server sockets. They all sit idle, so they are all sat on an async read waiting for traffic. Great, efficient stuff. However, writing fully async code for everything is a pain, so when a message arrives the handling thread tends to then go synchronous to make it simpler to program.

So imagine an HTTP server. It's got a bunch of connections and efficiently waits on them all asynchronously. Once a request comes in on one socket it reads the header, then wraps an iostream round the POST body and passes it to an appropriate handler, so that the reader of the POST body can just naturally and efficiently read it (efficient in the sense that it doesn't all have to be buffered up before being processed).

As I say, all I *really* want is setsockopt(SO_RCVTIMEO) on the socket. At the moment I don't know why that doesn't work; I might have just coded my experiment wrong.

Thanks.
On 16 March 2018 at 07:46 Jeremi Piotrowski via Boost-users wrote:
[snip]
On 16/03/2018 08:04, Thomas Quarendon wrote:
I've got a boost::asio based server, and generally it's working well. The server runs multiple threads, each simply running the io_service::run method on the same io_service object. It accepts multiple connections and processes multiple concurrent sockets. It uses a strand per connection to ensure that the data for each connection is processed strictly sequentially.
For various reasons it uses a mixture of async and sync reads. The trouble I'm having is implementing a timeout on the synchronous (blocking) reads. I want to protect the server in the case of a client sending an erroneous or malicious packet for example, and make sure a read doesn't block forever.

This is probably going to seem like an overly flippant answer, but I do firmly believe that it is the *only* correct answer.
Don't mix async and sync. Once you go async, you have to go async "all the way down". This means that your async handlers must never make blocking calls themselves.

If your sync and async operations are both on the same socket, then you must convert them all to async -- if you don't like the async-callback code style that results, look into the coroutine style instead, which looks more like sync code while still behaving like async code.

If your sync calls operate on different objects and you can't convert those blocking calls to async calls (eg. they're calling some library API that doesn't provide async), then you should make a "sync worker thread" that has its own separate io_service, and have your async workers post jobs to this service, then post back completions to the original io_service once the task is done.

It's up to you how many of these sync worker threads to create, ranging from one global one (easy but will make everything wait for everyone else's blocking operations), through to a small threadpool, through to one per connection (also easy but risks thread explosion). There's no One True Answer™, it will depend on your application's expected workload and connection count.
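(A bare-bones sketch of that "sync worker thread" shape -- every name here is illustrative, and the sleep stands in for a blocking call such as a synchronous read.)

    #include <boost/asio.hpp>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main()
    {
        boost::asio::io_service main_io;    // async front end
        boost::asio::io_service worker_io;  // owns all blocking calls
        boost::asio::io_service::work main_work(main_io);
        boost::asio::io_service::work worker_work(worker_io);
        std::thread worker([&] { worker_io.run(); });

        // From an async handler, hand the blocking work to the worker...
        main_io.post([&] {
            worker_io.post([&] {
                std::this_thread::sleep_for(std::chrono::seconds(1));  // "blocking" task
                // ...then post the completion back to the async side.
                main_io.post([&] {
                    std::cout << "blocking work done\n";
                    main_io.stop();
                    worker_io.stop();
                });
            });
        });

        main_io.run();
        worker.join();
    }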
This is probably going to seem like an overly flippant answer, but I do firmly believe that it is the *only* correct answer.
Not at all, you're quite right, and I'm aiming in that direction. My investigations have bifurcated, in a way.

On the one hand, I'm interested from an academic point of view in why it is I can't do a sync read with timeout. So let's say I'm using asio to write a more traditionally structured synchronous blocking thread-per-connection model server. Sync all the way. I can't implement a read timeout -- well, not in a way that's directly provided by the library. The only ways to do it require mixing in some async, and the only methods that exist are really workarounds from what I can see. Yet if I want to wrap an iostream around a socket using basic_socket_streambuf, and perhaps code it that way, I *CAN* do a sync read with timeout, directly supported by the library. Given the intention of the draft standard that backs asio, this seems like an oversight to me.

On the other hand, how should I be implementing my server? As you say, a way of mixing sync and async is to have a fully async "front end" that would deal with bad clients and sanitise the input. This could run on only one thread and handle many thousands of concurrent connections quite easily, as it would never block. What it would then want to do, though, is use an async write to put data onto an internal pipe. Then a dedicated worker thread can use a synchronous read to read off that queue. The front end can close its end of the pipe if it has detected a bad/slow client, and the worker thread will see an EOF; there's no need for the worker thread to worry about read with timeout. The nice thing is that you get natural flow control: the front end won't read more off the input socket until the write to the internal pipe has completed, which it will only do if there's space, so you won't have to implement flow control manually.

On Linux you can do that, since you can create an anonymous pipe easily and wrap it with asio. You can't create anonymous pipes on Windows though, so you have to create named pipes in the operating system, which is going to add overhead. What you would *really* want is a fully user-mode pipe implemented purely within asio, not going down to the kernel at all apart from use of a mutex and condition variable. But then I've just invented zeromq.
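(A minimal sketch of the Linux pipe idea, assuming POSIX and eliding error handling: the front end writes asynchronously, a worker reads synchronously, and the kernel pipe buffer provides the back-pressure described above.)

    #include <boost/asio.hpp>
    #include <iostream>
    #include <unistd.h>

    int main()
    {
        boost::asio::io_service io;

        int fds[2];
        ::pipe(fds);  // fds[0] = read end, fds[1] = write end

        // stream_descriptor takes ownership of each raw fd
        boost::asio::posix::stream_descriptor read_end(io, fds[0]);
        boost::asio::posix::stream_descriptor write_end(io, fds[1]);

        // Front end: async write. If the pipe buffer were full, this simply
        // wouldn't complete, so no more would be read from the client socket.
        boost::asio::async_write(write_end, boost::asio::buffer("hello\n", 6),
            [](boost::system::error_code, std::size_t) {});
        io.run();

        // Worker side: a plain blocking read on the other end.
        char data[64];
        std::size_t n = read_end.read_some(boost::asio::buffer(data));
        std::cout.write(data, n);
    }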
This ticket may explain why things are the way they are. Personally, we use deadline_timer and it works (I think): https://svn.boost.org/trac10/ticket/2832

On Tue, Mar 20, 2018 at 2:16 PM, Thomas Quarendon via Boost-users <boost-users@lists.boost.org> wrote:
[snip]
This ticket may explain why things are the way they are.
Thanks, I hadn't found that. I've read the thread, though so far I've not understood the actual reason why this isn't provided. The original suggestion was rejected, and fine, that wasn't the way to do it. The author didn't dismiss the concept, just the form. I've added my plea though.
Personally, we use deadline_timer and it works (I think)

As I say, so far I've been unable to find a form that works properly with multiple threads, multiple clients, etc. It's daft, when a simple modification to the library would provide a built-in read deadline. It's doable. It's DONE in the case of a socket_streambuf, just not for a socket. Frustrating.
Then a dedicated worker thread can use a sychronous read to read off that queue.
This argues for a separate io_context for use by that thread.

On Tue, 2018-03-20 at 14:16 +0000, Thomas Quarendon via Boost-users wrote:
[snip]
This argues for a separate io_context for use by that thread.
The arrangement of the io_services in the backend in that hypothetical design wouldn't really matter, as everything would be done synchronously by that backend worker thread anyway. So it may as well be an io_service per pipe (I've no idea whether there are inefficiencies either way, but I don't think it would make any difference to the way it behaved).
On 21/03/2018 03:16, Thomas Quarendon wrote:
On Linux you can do that, since you can create an anonymous pipe easily and wrap it with asio. You can't create anonymous pipes on Windows though, so you have to create named pipes in the operating system, which is going to add overhead. What you would *really* want is a fully user mode pipe implemented purely within asio and not going down to the kernel at all, apart from use of mutex and condition variable. But then I've just invented zeromq.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365152.aspx

There are even ways to pass anonymous pipe handles explicitly to another already-running process (with cooperation on both sides), which I don't think is possible in Linux. Granted, this is still an OS pipe, but there isn't that much overhead, especially within a single process.

Still, there's no reason to create pipes for inter-thread communication. You're in a shared memory space, and ASIO is itself a queued-task management system, and you can store arbitrary state for each task (including completely unique structures for each task, if you really want). Just store the data you want to process directly in the object that you're posting to the other io_service, then store the result in the object that you post back for completion.

If you want to stick with a data-stream structure, you could use a Boost.LockFree queue, or a regular queue protected by a mutex, or any other data structure that fits your purpose.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365152.aspx
Yes, sorry. Yes, you can create pipes in Windows, but asio doesn't support Windows anonymous pipes or console handles, as they don't support completion ports. But yes, you can create a named pipe with a unique name (which is what CreatePipe does).
You're in a shared memory space, and ASIO is itself a queued-task management system, and you can store arbitrary state for each task (including completely unique structures for each task, if you really want). Just store the data you want to process directly in the object that you're posting to the other io_service, then store the result in the object that you post back for completion.
Whilst this is true, what it doesn't give you is any flow control. Consider implementing something like an HTTP upload. The front end loop will go as fast as it can reading off the input socket, each time into a new buffer. For each read it then posts some kind of job to the io_service, wrapped in a strand to ensure sequential operation. If the backend doesn't keep up, you end up with an enormous backlog within the io_service, and enormous memory usage.

What you really want is an async write to a pipe to a backend. That way, when the pipe is full, the write doesn't complete. Since it doesn't complete, no new data will be read from the socket, so the TCP buffers back up, and the client's write then blocks. Perfect.

Anyway, I'm going off this pattern anyway. I don't see it offers any advantage over just a synchronous thread-per-connection design. Using async initially, then sync to read the "message", works well, it's a nice design -- if only I could put a timeout on the synchronous reads.
On 22/03/2018 22:56, Thomas Quarendon wrote:
Yes, sorry. Yes, you can create pipes in Windows, but asio doesn't support Windows anonymous pipes or console handles, as they don't support completion ports. But yes, you can create a named pipe with a unique name (which is what CreatePipe does).
Anonymous pipes created with CreatePipe don't support overlapped I/O, true; but as you said yourself they're just named pipes with a unique name, and CreateNamedPipe *can* support overlapped I/O, so you can make up your own name and use them with a windows::stream_handle to do async reads and writes.

It's a little more fiddly, but you can also compose your own I/O objects and operations around other APIs, presumably such as TransactNamedPipe. I have some code that turns a shared memory block (with events) into a simple one-slot interprocess ASIO-compatible async mailbox, for example; the key trick was to use windows::object_handle::async_wait to "wait" for something to happen and then read the data synchronously in the wait completion handler (because at that point it is guaranteed to complete without blocking).
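(A minimal sketch of that object_handle trick, Windows-only; the event and the console output stand in for whatever shared-memory protocol the actual mailbox uses.)

    #include <boost/asio.hpp>
    #include <windows.h>
    #include <iostream>

    int main()
    {
        boost::asio::io_service io;

        HANDLE ev = ::CreateEventA(nullptr, FALSE, FALSE, nullptr);
        boost::asio::windows::object_handle waiter(io, ev);

        // Async "wait" on the event; by the time the handler runs, the data
        // in the shared block is ready and can be read without blocking.
        waiter.async_wait([](boost::system::error_code ec) {
            if (!ec)
                std::cout << "event signalled; read the mailbox synchronously\n";
        });

        ::SetEvent(ev);  // stands in for the producer signalling new data
        io.run();
    }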
Using async initially, then sync to read the "message" works well, it's a nice design, if only I could put a timeout on the synchronous reads.
You can use a deadline_timer and cancel the synchronous read if it trips. It's a bit fugly though and is subject to races. It's much nicer to use async.
You can use a deadline_timer and cancel the synchronous read if it trips. It's a bit fugly though and is subject to races. It's much nicer to use async.
The point is though that I don't think this works. This is what I started with. It works on Windows OK. But on Linux, calling "cancel" or "close" on a socket doesn't cancel a synchronous read call. This is what started me down this whole route.
On Fri, 23 Mar 2018 at 09:07, Thomas Quarendon via Boost-users <boost-users@lists.boost.org> wrote:
You can use a deadline_timer and cancel the synchronous read if it trips. It's a bit fugly though and is subject to races. It's much nicer to use async.
The point is though that I don't think this works. This is what I started with. It works on Windows OK. But on Linux, calling "cancel" or "close" on a socket doesn't cancel a synchronous read call. This is what started me down this whole route.
This seems unlikely. I have been using asio in production code on Linux for 4 years. Can you post a mcve so I can test?
This seems unlikely. I have been using asio in production code on Linux for 4 years. Can you post a mcve so I can test?

Yes, the code I started this thread with: https://gist.github.com/tomq42/331b8d48110c5025e0fce93e689bd5a3
I don't think this is a surprise. As far as I understand it, "cancel" isn't expected to cancel a *synchronous* read from the socket. It's more of a surprise to me that calling close on the socket doesn't have the effect of causing the read to return with an error. Both of these things work on Windows, and I started out on Windows, so the code was all fine there.
Forgive me, I read "a synchronous call" as "asynchronous call". Of course the correct way to "cancel" a sync call in Linux is to raise a signal, which should cause the socket's read to return with EINTR.

But before I realised my mistake, I wrote this little test to prove that async calls are cancelled :) Maybe someone will find it useful...

#include <cstdlib>
#include

*read handler error: Operation canceled*
On 23 March 2018 at 08:44, Thomas Quarendon via Boost-users <boost-users@lists.boost.org> wrote:
[snip]
Of course the correct way to "cancel" a sync call in linux is to raise a signal, which should cause the socket's read to return with EINTR.
Not sure I understand. So if a deadline timer callback raises something like SIGUSR1, then a boost asio call to socket.read_some will return with an error code? If that's so, then that gives an easy way to implement a timeout, doesn't it? My suspicion is that although the raw socket recv call may return that EINTR, I won't see that at the boost::asio level, but I'll give it a go. I may be misunderstanding though.
Of course the correct way to "cancel" a sync call in linux is to raise a signal, which should cause the socket's read to return with EINTR.
Hmm. I can't see any evidence of it working in my test. I've made my deadline callback do a raise(SIGALRM). If I don't have a handler for that signal, then my process terminates, which I guess is fair enough. If I have a (boost asio) handler and essentially ignore the signal (just print "signal handled" from my signal handler), the signal is handled, but my read doesn't return.

My recollection of the boost asio code for socket recv is that it doesn't simply perform a raw socket recv call, but rather loops in case of error, with a "poll" call in there too. So while that technique may work OK for a raw socket, it doesn't work with asio.
You should probably not use boost to handle signals. The page about signalfd (https://linux.die.net/man/2/signalfd) has this remark: "Normally, the set of signals to be received via the file descriptor should be blocked using sigprocmask(2), to prevent the signals being handled according to their default dispositions."

So I would expect that boost does exactly this, and therefore the synchronous read doesn't get interrupted. (If boost didn't block the signal, the behavior would be the "default disposition", i.e., termination, as you have observed.)
So try the following: install a signal handler for SIGALRM (or any other) and do NOT wait for it using boost. Have the handler just return, and see whether the blocking recv got interrupted. However... it's not that simple. Signals and threads don't play nicely together: a signal will be delivered to an arbitrary thread that didn't block it.
So you should have a variable holding the thread id of the thread running your io_context::run, and from within the signal handler:

1. Check whether the thread executing the handler is different from the thread running io_context::run.
2. If so, use pthread_kill to re-deliver the signal to the correct thread.
3. Otherwise, do nothing.

Signal handlers are global to the process, so without the check for the thread you can end up in an endless loop.
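(A sketch of that re-delivery idea, assuming POSIX; all names are illustrative. Note the caveat raised elsewhere in the thread: this interrupts a raw recv() with EINTR, but asio's synchronous read retries internally, so it may not unblock an asio read.)

    #include <pthread.h>
    #include <signal.h>

    static pthread_t io_thread;  // id of the thread blocked in the read

    extern "C" void on_alarm(int signo)
    {
        // Signal handlers are process-global: if we landed on the wrong
        // thread, forward the signal to the one blocked in recv().
        if (!pthread_equal(pthread_self(), io_thread))
            pthread_kill(io_thread, signo);
        // Otherwise do nothing; delivery alone makes recv() return EINTR,
        // provided the handler was installed without SA_RESTART.
    }

    void install_handler()
    {
        struct sigaction sa = {};
        sa.sa_handler = on_alarm;  // note: no SA_RESTART in sa_flags
        ::sigaction(SIGALRM, &sa, nullptr);
    }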
Yeah, it's a mess compared to Windows CancelSynchronousIO
On Fri, Mar 23, 2018 at 11:42:02AM +0000, Thomas Quarendon via Boost-users wrote:
Yeah, it's a mess compared to Windows CancelSynchronousIO

Quite. Yuck!
All I want is to be able to pass a timeout to the poll call that asio makes on my behalf! The operating system gives me what I want, I just can't get at it.
What if you use boost::asio's future support? Your BlockingConnection::start() method then looks like this:

void BlockingConnection::start()
{
    std::vector<char> b(1024);

    std::future<std::size_t> future = socket_.async_read_some(
        buffer(b), boost::asio::use_future);

    if (future.wait_for(std::chrono::milliseconds{1000}) ==
            std::future_status::timeout) {
        std::cout << "BlockingConnection terminating" << std::endl;
        socket_.close();
    }
    else {
        // get() may throw
        std::cout << "read bytes: " << future.get() << std::endl;
    }
}

This seems to be the closest to what you need. But it still requires that you not wait_for() from within the same io_service. So what you could do is have two io_services: one for all actual async operations, with a thread blocked on run(), and a second io_service with as many threads as connections that you want to handle concurrently with sync reads. You would accept in the first io_service, and then .post(BlockingConnection::start) into the second one. Seems like this could work.
participants (6): Gavin Lambert, james, Jeremi Piotrowski, Richard Hodges, Stian Zeljko Vrba, tom@quarendon.net