Sorry for any duplicates and the top-posting, but I didn't reply-all, so the Boost list got dropped (which was probably the more important recipient). This also likely confuses some mail clients (again, sorry!).
On Fri, Aug 7, 2015 at 11:10 PM, Lee Clagett wrote:
On Fri, Aug 7, 2015 at 6:29 PM, Vinícius dos Santos Oliveira <vinipsmaker@users.sf.net> wrote:
2015-08-07 12:53 GMT-03:00 Niall Douglas:
I do not believe Http is ready for peer review for these reasons, and I personally would urge you withdraw it until you have fixed these issues and then come back to us. If you do not withdraw it, I will vote for rejection.
...[large snip]...
* I don't understand why you cannot issue more than one async read at a time nor async write at a time. In something like pipelined HTTP that seems like a very common pattern. I smell a potential design flaw, and while the docs mention a "queue_socket" I can't see such a thing in the reference section.
Too much synchronization and buffering would be required.
Suppose the pipeline contains the following requests:
A => B => C
If I were to allow replies to request B, the design would be more complex, increasing the difficulty of implementing alternative backends and complicating everyone's life. Also, the reply to request B cannot actually be sent while the reply to request A is on hold. This means the whole response to request B would need to be buffered, and if response B is a live video stream, that implies an implicit crash (one the user cannot anticipate) after an unfortunate memory exhaustion. I couldn't claim a **lightweight** server if this excessive buffering were to occur. I couldn't claim the project is appropriate for embedded devices if I had chosen the buffering approach. It's a trade-off. I should write about it in the "design choices" section of the documentation.
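To make the trade-off concrete, here is a minimal, self-contained sketch (none of these names come from the library) of what out-of-order handlers would force on a pipelined connection: a reply that finishes early must be buffered in full until everything ahead of it has been written.

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Hypothetical per-connection state: even if handlers finish out of order,
// HTTP/1.1 replies must leave the socket in request order.
struct pending_reply {
    bool ready = false;        // the handler finished producing this reply
    std::string buffered_body; // unbounded if the reply is a live stream
};

struct connection {
    std::vector<pending_reply> pipeline; // index 0 is the oldest request
    std::size_t next_to_send = 0;        // first reply not yet on the wire

    // Called when the handler for request `index` finishes. If an earlier
    // reply is still pending, this one can only be buffered, not sent.
    void reply_ready(std::size_t index, std::string body) {
        pipeline[index].ready = true;
        pipeline[index].buffered_body = std::move(body);
        flush();
    }

    void flush() {
        // Only the oldest unsent reply may reach the socket; completed
        // replies behind it keep occupying memory.
        while (next_to_send < pipeline.size() && pipeline[next_to_send].ready) {
            std::printf("send %zu bytes\n",
                        pipeline[next_to_send].buffered_body.size());
            ++next_to_send;
        }
    }
};

int main() {
    connection c;
    c.pipeline.resize(3);            // requests A => B => C
    c.reply_ready(1, "reply to B");  // B finished first: buffered, nothing sent
    c.reply_ready(0, "reply to A");  // now A and the buffered B go out, in order
    c.reply_ready(2, "reply to C");
}
```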
HTTP/2.0 does have proper multiplexing, and there it can be more useful to respond to several requests without "head of line blocking".
A queue socket is a concept; I don't provide one. I can look into making the documentation less confusing by adding a page dedicated exclusively to explaining the problem it solves. Would that work for you?
FWIW, it should be possible to do client- and server-side pipelining with this design right now. The reads and writes of the HTTP Socket concept should be (and currently are, by `basic_socket`) handled independently. Since the notification of a write completion is based on the lower-layer write completion, and NOT on whether an HTTP response was received, a client should be able to issue multiple (pipelined) requests. Similarly, this design should allow servers to handle pipelined requests in parallel. Providing a framework for client or server pipelining will require more work, and should probably not be part of the Socket concept.
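As a rough sketch of the client-side pipelining point (the Socket type and its `async_write_request`/`async_read_response` members below are placeholders rather than the proposed API; the only property taken from the paragraph above is that a write completion means "handed to the lower layer", not "response received"):

```cpp
#include <boost/system/error_code.hpp>
#include <cstddef>
#include <deque>

// Keep writing requests as soon as the previous write hands its bytes to the
// lower layer; no response is awaited in between.
template<class Socket, class Request>
void write_pipeline(Socket& socket, std::deque<Request>& queue)
{
    if (queue.empty())
        return;
    socket.async_write_request(queue.front(),
        [&socket, &queue](boost::system::error_code ec) {
            if (ec) return;
            queue.pop_front();
            write_pipeline(socket, queue); // next request goes out immediately
        });
}

// Read the responses one by one; HTTP/1.1 obliges the server to answer
// pipelined requests in the order they were received.
template<class Socket, class Response>
void read_pipeline(Socket& socket, std::deque<Response>& responses,
                   std::size_t remaining)
{
    if (remaining == 0)
        return;
    responses.emplace_back();
    socket.async_read_response(responses.back(),
        [&socket, &responses, remaining](boost::system::error_code ec) {
            if (ec) return;
            read_pipeline(socket, responses, remaining - 1);
        });
}
```

The two chains run independently of each other, which is exactly the property the paragraph above relies on (the caller owns `queue` and `responses` and must keep them alive until both chains finish).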
In fact, I think the ServerSocket concept should be removed entirely. `http::basic_socket<...>` models both concepts, so there's a lot of server-specific code in it, which seems confusing to me (is this really a "basic" socket?). I think standalone functions for typical client and server operations can be implemented as "composed" operations, similar to `asio::async_write`. The documentation will have to be explicit about what each composed function does, so that users know whether the Socket concept has outstanding reads or writes (and therefore cannot perform certain operations until the handler is invoked). In fact, if there is a server operation that _cannot_ be done with the Socket concept, then the concept probably needs to be redesigned.
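A hedged sketch of what such a composed server-side helper could look like, in the spirit of `asio::async_write`. The member operations named here (`async_write_response_metadata`, `async_write`, `async_write_end_of_message`) are illustrative and may not match the proposed Socket concept exactly; the sketch also assumes each of them completes with just an `error_code`:

```cpp
#include <boost/system/error_code.hpp>
#include <utility>

// Hypothetical composed operation: write the metadata, then the body, then
// the end-of-message marker, invoking the user's handler exactly once.
template<class Socket, class Message, class Handler>
struct async_respond_op {
    Socket& socket;
    const Message& message;
    Handler handler;
    int step;

    void operator()(boost::system::error_code ec = {})
    {
        if (ec) { handler(ec); return; }
        switch (step++) {
        case 0: socket.async_write_response_metadata(message, std::move(*this)); return;
        case 1: socket.async_write(message, std::move(*this)); return;
        case 2: socket.async_write_end_of_message(std::move(*this)); return;
        default: handler(ec); // all steps finished successfully
        }
    }
};

// User-facing free function: the socket has an outstanding write, and must
// not be written to again, until `handler` is invoked.
template<class Socket, class Message, class Handler>
void async_respond(Socket& socket, const Message& message, Handler handler)
{
    async_respond_op<Socket, Message, Handler>{socket, message, std::move(handler), 0}();
}
```

As with `asio::async_write`, the documentation for such a function would simply state which operations are outstanding until the handler runs.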
Lee