2015-08-15 23:14 GMT-03:00 Lee Clagett:
> But this is not defined by the http::Socket concept. It's not possible to write a function that takes any model of the http::Socket concept and limits the number of bytes being pushed into the container. A conforming http::Socket implementation is currently allowed to keep resizing the container as necessary to add data (even until the end of the payload), and I thought preventing that scenario was being touted as a benefit of Boost.Http. Adding a size_t parameter or a fixed buffer to `async_read_some` would be a strong signal of intent to implementors; a weaker one would be a statement in the documentation that a conforming implementation of the concept may only read/insert an unspecified fixed number of bytes before invoking the callback.
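(Restating the above to be sure we are discussing the same thing: the shape being asked for is roughly the sketch below. The raw Asio part is real; the http-level overload in the comment is hypothetical and does not exist in Boost.Http.)

```cpp
#include <boost/asio.hpp>
#include <array>
#include <cstddef>

namespace asio = boost::asio;

// What Asio's stream sockets already offer: the caller supplies a fixed
// buffer, so a single completion can never deliver more than buf.size()
// bytes. (The buffer must outlive the asynchronous operation.)
void bounded_raw_read(asio::ip::tcp::socket &sock, std::array<char, 4096> &buf)
{
    sock.async_read_some(asio::buffer(buf),
        [](boost::system::error_code /*ec*/, std::size_t bytes_read) {
            (void)bytes_read; // bytes_read <= 4096 by construction
        });
}

// Hypothetical http-level analog (not Boost.Http's interface): an explicit
// cap telling a conforming Socket how much it may append per completion.
//
//   template<class Message, class Handler>
//   void async_read_some(Message &msg, std::size_t max_bytes, Handler h);
```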
You can pass a message which will just drop data (a stub push_back function) and it will work with any Socket. Limiting the number of bytes being pushed is not a property that every Socket can guarantee, so you cannot write that function. The Socket might simply not be able to save the excess read for later delivery and must fill the container now. Like I said, a trait could help you with a standardized API for **capable** Sockets.
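To make the "drop data" point concrete, here is a minimal sketch of a body container whose push_back discards everything (illustrative only; it does not list every member a real Boost.Http message body must provide):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of the "just drop data" idea: a body container whose push_back is a
// no-op, so whatever the Socket appends costs O(1) memory regardless of the
// payload size. (Illustrative only; a real message type must provide every
// member the Socket concept actually uses on the body container.)
struct discarding_body
{
    using value_type = std::uint8_t;

    void push_back(value_type) { ++discarded; } // count, but keep nothing
    std::size_t size() const { return 0; }
    void clear() {}

    std::size_t discarded = 0; // how many payload bytes were seen and dropped
};

int main()
{
    discarding_body body;
    for (int i = 0; i != 1000000; ++i) // "receive" a large payload
        body.push_back(0);
    // body.size() is still 0: nothing was stored, only counted.
    return body.size() == 0 ? 0 : 1;
}
```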
> The memory requirements are affected by the max header size. With a push/pull parser it is possible to rip out information in a fixed amount of memory. The HTTP parser this library is using is a good example: it never allocates memory, does not require the client to allocate any memory, and the #define for the max header size does _not_ change the size requirements of its data structures. It keeps the necessary state and information in a fixed amount of space, yet is still able to know whether transfer-encoding: chunked was sent, etc.
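The fixed-memory idea can be reduced to a toy like the one below: a push-style scanner that is fed header bytes one at a time and answers a single question using only a small, fixed line buffer. It is a sketch for the discussion, nothing like the parser the library actually wraps.

```cpp
#include <array>
#include <cctype>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <string_view>

// Toy push-style scanner: answers "was Transfer-Encoding: chunked sent?"
// with a fixed amount of state, no matter how large the header section is.
class chunked_detector
{
public:
    void feed(char c)
    {
        if (c == '\n') {            // end of a header line
            line_[len_] = '\0';
            if (std::strncmp(line_.data(), "transfer-encoding:", 18) == 0
                && std::strstr(line_.data(), "chunked") != nullptr)
                chunked_ = true;
            len_ = 0;               // reuse the same fixed buffer
            return;
        }
        if (c == '\r')
            return;
        if (len_ + 1 < line_.size()) // overlong lines are truncated, never buffered in full
            line_[len_++] = static_cast<char>(
                std::tolower(static_cast<unsigned char>(c)));
    }

    bool chunked() const { return chunked_; }

private:
    std::array<char, 64> line_{}; // fixed state: one small line buffer
    std::size_t len_ = 0;
    bool chunked_ = false;
};

int main()
{
    const char headers[] =
        "Host: example.com\r\n"
        "Transfer-Encoding: chunked\r\n"
        "\r\n";
    chunked_detector d;
    for (char c : std::string_view(headers))
        d.feed(c);
    std::cout << std::boolalpha << d.chunked() << '\n'; // prints: true
}
```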
And the parser this library uses is a very specific one; it won't work with alternative HTTP backends.
> The initial source of my parser thoughts was how to combine ideas from boost::spirit into an HTTP system. A client could do a POST/PUT/DELETE and then issue `msg_socket.async_read(http::parse_response_code(), void(uint16_t, error_code))`, which would construct an HTTP parser that extracts the response code from the server and tracks a minimal set of headers (content-length, transfer-encoding, connection), yet still operates in a fixed memory budget even if max header / max payload were size_t::max. I still don't see how this is possible without a notification parser exposed somewhere in the design. Again, I'm not downvoting Boost.Http because it lacks this capability. I'm not sure of the demand for such a complicated library just to manipulate HTTP sockets.
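If I understand the idea, the call site would look roughly like the sketch below. Every name in it (`parse_response_code`, `message_socket`, this `async_read` overload) is invented for illustration; nothing in Boost.Http declares them.

```cpp
// Everything below is hypothetical: parse_response_code, message_socket and
// this async_read overload are invented for illustration; Boost.Http does
// not provide them.
#include <boost/system/error_code.hpp>
#include <cstdint>

namespace http {

struct parse_response_code {}; // imagined spirit-like "rule" matching a status line

struct message_socket {        // imagined socket wrapper accepting such rules
    template<class Rule, class Handler>
    void async_read(Rule, Handler handler)
    {
        // A real implementation would feed bytes to a fixed-state parser and
        // call the handler once the status code has been consumed.
        handler(std::uint16_t{200}, boost::system::error_code{});
    }
};

} // namespace http

int main()
{
    http::message_socket msg_socket;
    msg_socket.async_read(http::parse_response_code(),
        [](std::uint16_t status, boost::system::error_code ec) {
            (void)status; (void)ec; // status extracted without buffering headers/body
        });
}
```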
I can provide the parser separately in the future, but that parser will be useful only for the HTTP wire format; it won't be useful if you want to target alternative HTTP backends. It's a trade-off the user will have to make.

--
Vinícius dos Santos Oliveira
https://about.me/vinipsmaker