This ticket may explain why things are the way they are. Personally, we use deadline_timer and it works (I think): https://svn.boost.org/trac10/ticket/2832

On Tue, Mar 20, 2018 at 2:16 PM, Thomas Quarendon via Boost-users <boost-users@lists.boost.org> wrote:
> This is probably going to seem like an overly flippant answer, but I do firmly believe that it is the *only* correct answer.
Not at all, you're quite right, and I'm aiming in that direction.
My investigations have bifurcated, in a way. On the one hand, I'm interested from an academic point of view in why I can't do a sync read with timeout. Say I'm using asio to write a more traditionally structured synchronous, blocking, thread-per-connection server. Sync all the way. I can't implement a read timeout, at least not in a way that's directly provided by the library. The only ways to do it require mixing in some async, and from what I can see the methods that exist are really workarounds. Yet if I wrap an iostream around a socket using basic_socket_streambuf, and perhaps code it that way, I *CAN* do a sync read with timeout, directly supported by the library. Given the intention of the draft standard that backs asio, this seems like an oversight to me.
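For the record, the shape of that async-mixing workaround (the deadline_timer trick from the ticket, spelled here with the newer steady_timer) is roughly the following. A sketch only, assuming Boost 1.66 or later (expires_after and restart were expires_from_now and reset in older releases), and read_with_timeout is my own name, not a library function:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <cstddef>

    // Blocking read with a deadline, built from async_read_some plus a
    // timer whose expiry cancels the socket's pending operation.
    std::size_t read_with_timeout(boost::asio::io_context& io,
                                  boost::asio::ip::tcp::socket& sock,
                                  const boost::asio::mutable_buffer& buf,
                                  std::chrono::steady_clock::duration timeout,
                                  boost::system::error_code& ec)
    {
        boost::asio::steady_timer timer(io);
        timer.expires_after(timeout);
        timer.async_wait([&](const boost::system::error_code& e) {
            if (!e) sock.cancel(); // deadline passed: abort the pending read
        });

        std::size_t n = 0;
        sock.async_read_some(buf,
            [&](const boost::system::error_code& e, std::size_t bytes) {
                ec = e;
                n = bytes;
                timer.cancel(); // data arrived first: stop the timer
            });

        io.restart();
        io.run(); // single-threaded, so this blocks until both handlers ran
        return n; // on timeout, ec == boost::asio::error::operation_aborted
    }

And the basic_socket_streambuf route is indeed direct by comparison: on a tcp::iostream, s.expires_after(std::chrono::seconds(5)) (expires_from_now in older releases) makes subsequent blocking reads fail with an error instead of hanging forever.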
On the other hand, how should I be implementing my server? As you say, one way of mixing sync and async is to have a fully async "front end" that deals with bad clients and sanitises the input. This could run on only one thread and handle many thousands of concurrent connections quite easily, as it would never block. What it would then do is use an async write to put data onto an internal pipe, and a dedicated worker thread can use a synchronous read to pull data off that pipe. The front end can close its end of the pipe if it detects a bad/slow client, and the worker thread will see an EOF, so there's no need for the worker thread to worry about read with timeout. The nice thing is that you get natural flow control: the front end won't read more off the input socket until the write to the internal pipe has completed, which it will only do if there's space, so you don't have to implement flow control manually.
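Concretely, the plumbing might look something like this on the POSIX side (a sketch only; the message and the worker body are stand-ins):

    #include <boost/asio.hpp>
    #include <thread>
    #include <unistd.h>

    int main()
    {
        int fds[2];
        if (::pipe(fds) != 0) return 1;

        boost::asio::io_context io;
        boost::asio::posix::stream_descriptor to_worker(io, fds[1]);

        // Worker: plain blocking reads, no timeout logic needed.
        std::thread worker([fd = fds[0]] {
            char buf[4096];
            ssize_t n;
            while ((n = ::read(fd, buf, sizeof buf)) > 0)
                ; // handle n sanitised bytes synchronously
            ::close(fd); // n == 0: front end closed its end, a clean EOF
        });

        // Front end: async write of already-sanitised input. The handler
        // only fires once the pipe has accepted the data, so a full pipe
        // (slow worker) automatically throttles reads from the client.
        static const char msg[] = "sanitised request\n";
        boost::asio::async_write(to_worker,
            boost::asio::buffer(msg, sizeof msg - 1),
            [&](const boost::system::error_code&, std::size_t) {
                to_worker.close(); // e.g. bad client detected: worker sees EOF
            });

        io.run();
        worker.join();
    }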
On Linux you can do that, since you can create an anonymous pipe easily and wrap it with asio. Anonymous pipes on Windows don't support overlapped I/O though, so you have to create named pipes in the operating system, which is going to add overhead. What you would *really* want is a fully user-mode pipe implemented purely within asio, not going down to the kernel at all apart from the use of a mutex and condition variable. But then I've just reinvented ZeroMQ.
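The synchronous half of such a user-mode pipe is little more than a bounded queue guarded by that mutex and condition variable; a sketch of just that half (the hard part, omitted here, is bolting an asio-composable async_write onto the front of it):

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <utility>
    #include <vector>

    // Hypothetical user-mode "pipe": a bounded queue of buffers. push()
    // blocks while full (flow control), pop() blocks while empty, and
    // close() leads to a false return from pop() once drained, i.e. EOF.
    class byte_pipe {
    public:
        explicit byte_pipe(std::size_t capacity) : capacity_(capacity) {}

        bool push(std::vector<char> chunk) {
            std::unique_lock<std::mutex> lk(m_);
            not_full_.wait(lk, [&] { return closed_ || q_.size() < capacity_; });
            if (closed_) return false;
            q_.push_back(std::move(chunk));
            not_empty_.notify_one();
            return true;
        }

        bool pop(std::vector<char>& chunk) {
            std::unique_lock<std::mutex> lk(m_);
            not_empty_.wait(lk, [&] { return closed_ || !q_.empty(); });
            if (q_.empty()) return false; // closed and drained: EOF
            chunk = std::move(q_.front());
            q_.pop_front();
            not_full_.notify_one();
            return true;
        }

        void close() {
            std::lock_guard<std::mutex> lk(m_);
            closed_ = true;
            not_empty_.notify_all();
            not_full_.notify_all();
        }

    private:
        std::mutex m_;
        std::condition_variable not_empty_, not_full_;
        std::deque<std::vector<char>> q_;
        const std::size_t capacity_;
        bool closed_ = false;
    };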