On Sun, Oct 8, 2017 at 5:38 PM, Vinícius dos Santos Oliveira wrote:
> Now, moving on... given that you have __not__ answered how your parser's design[1] compares to the parser I've developed
I'll try to provide more clarity. `beast::basic_parser` is designed
with standardization in mind, as I intend to eventually propose Beast
for the standard library. Therefore, I have made the interface as
simple as possible, exposing only the minimum necessary to achieve
the goals that the majority of users want:
* Read the header first, if desired
* Feed Asio style buffer sequences into the parser
* Set independent limits on the number of header and body octets
* Optional fine-grained receipt of chunked body data and metadata
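To make the feeding model concrete, here is a toy sketch (illustration only, not Beast's actual API; all names here are hypothetical) of the usage pattern the bullets describe: input arrives in arbitrary chunks, the parser consumes what it needs, and a configurable cap on header octets is enforced as data is fed in.

```cpp
// Illustration only: a toy incremental parser, NOT Beast's API.
// Shows the pattern described above: feed input in arbitrary-sized
// chunks, and enforce a limit on the number of header octets.
#include <cstddef>
#include <stdexcept>
#include <string>

class toy_parser
{
    std::string header_;
    std::size_t header_limit_ = 8192; // hypothetical default
    bool header_done_ = false;

public:
    void header_limit(std::size_t n) { header_limit_ = n; }
    bool is_header_done() const { return header_done_; }
    std::string const& header() const { return header_; }

    // Consume up to `size` octets, returning how many were used.
    // Stops consuming once the header is complete, leaving any
    // remaining octets (e.g. body data) for the caller.
    std::size_t put(char const* data, std::size_t size)
    {
        std::size_t used = 0;
        while(used < size && ! header_done_)
        {
            header_.push_back(data[used++]);
            if(header_.size() > header_limit_)
                throw std::length_error("header limit exceeded");
            // A blank line ends the header (simplified check).
            if(header_.size() >= 4 &&
               header_.compare(header_.size() - 4, 4, "\r\n\r\n") == 0)
                header_done_ = true;
        }
        return used;
    }
};
```

The essential property is that `put` never requires the whole message at once, which is what lets stream algorithms drive the parser from any buffer source.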
The design of basic_parser (the use of CRTP in particular) is meant to
support the case where a user implements their own Fields container,
or if they want a bit more custom handling of the fields (for example,
to avoid storing them).
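The shape of that CRTP arrangement can be sketched as follows (a toy illustration, not Beast's actual hook names or signatures): the base class owns the parsing loop, and each derived class decides what to do with the parsed fields, including discarding them entirely.

```cpp
// Illustration of the CRTP design described above (NOT Beast's
// actual hooks): the base class parses, the derived class chooses
// whether to store each field or handle it some other way.
#include <cstddef>
#include <string>
#include <string_view>
#include <utility>
#include <vector>

template<class Derived>
class field_parser_base
{
public:
    // Parse "Name: value" lines; invoke the derived class per field.
    void parse(std::string_view text)
    {
        while(! text.empty())
        {
            auto eol = text.find("\r\n");
            auto line = text.substr(0, eol);
            auto colon = line.find(':');
            if(colon != std::string_view::npos)
            {
                auto value = line.substr(colon + 1);
                if(! value.empty() && value.front() == ' ')
                    value.remove_prefix(1);
                static_cast<Derived&>(*this).on_field(
                    line.substr(0, colon), value);
            }
            if(eol == std::string_view::npos)
                break;
            text.remove_prefix(eol + 2);
        }
    }
};

// One derived class stores the fields in its own container...
class storing_parser : public field_parser_base<storing_parser>
{
public:
    std::vector<std::pair<std::string, std::string>> fields;
    void on_field(std::string_view name, std::string_view value)
    {
        fields.emplace_back(std::string(name), std::string(value));
    }
};

// ...while another merely counts them, avoiding any storage at all.
class counting_parser : public field_parser_base<counting_parser>
{
public:
    std::size_t count = 0;
    void on_field(std::string_view, std::string_view) { ++count; }
};
```

Because the dispatch is static, the derived class pays nothing for the flexibility: no virtual calls, and no storage it did not ask for.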
Like all design choices, tradeoffs are made. The details of parsing
are exposed only to the derived class. Complexities are hidden from
the public-facing interface of `basic_parser`. Implementing a stream
algorithm that operates on the parser is a straightforward process:
template<
    class SyncReadStream, class DynamicBuffer,
    bool isRequest, class Derived>
std::size_t
read(
    SyncReadStream& stream,
    DynamicBuffer& buffer,
    basic_parser<isRequest, Derived>& parser);
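As a rough sketch of what the body of such a stream algorithm looks like (using toy stream and parser types of my own here, not the real Asio/Beast ones, and omitting the DynamicBuffer and error_code machinery): the algorithm needs nothing from the parser beyond a small interface, so it composes with any conforming parser type.

```cpp
// Sketch of a stream algorithm's shape (toy types, NOT Asio/Beast):
// read chunks from a stream and feed them to any parser modeling a
// small put()/is_done() interface, until the parser is satisfied.
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <string>

// A toy "stream" that yields a fixed message in small pieces.
class string_stream
{
    std::string data_;
    std::size_t pos_ = 0;

public:
    explicit string_stream(std::string s) : data_(std::move(s)) {}

    std::size_t read_some(char* dest, std::size_t max)
    {
        std::size_t n = std::min(max, data_.size() - pos_);
        std::memcpy(dest, data_.data() + pos_, n);
        pos_ += n;
        return n;
    }
};

// A toy parser that consumes up to and including a blank line.
struct line_parser
{
    std::string header;
    bool done = false;

    bool is_done() const { return done; }

    std::size_t put(char const* data, std::size_t size)
    {
        std::size_t used = 0;
        while(used < size && ! done)
        {
            header.push_back(data[used++]);
            if(header.size() >= 4 &&
               header.compare(header.size() - 4, 4, "\r\n\r\n") == 0)
                done = true;
        }
        return used;
    }
};

// The generic algorithm: loop until the parser reports completion.
template<class SyncReadStream, class Parser>
std::size_t read(SyncReadStream& stream, Parser& parser)
{
    char buf[16];
    std::size_t total = 0;
    while(! parser.is_done())
    {
        std::size_t n = stream.read_some(buf, sizeof(buf));
        if(n == 0)
            break; // end of stream
        total += parser.put(buf, n);
    }
    return total;
}
```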
> [1] at most it hides behind a “multiple people/stakeholders agree with me” __shield__
There's no hiding going on here. One can only measure the relative success of designs based on the feedback from users. They opened issues, and I addressed their use-cases, sometimes with a considerable amount of iteration and back-and-forth, as you have seen in the GitHub issues quoted in the previous message.

Now, I don't know if the sampling of users that have participated in Beast's design is representative of the entire C++ community. However, I do know one thing: if GitHub stars are any measure of the sample size of the participants, then Beast is off to a good start. Here's a graph showing the number of stars over time received by boostorg/beast, tufao, and Boost.Http (the last two libraries having you as the author, I believe):

http://www.timqian.com/star-history/#boostorg/beast&BoostGSoC14/boost.http&vinipsmaker/tufao

(Note that the HTTP+WebSocket version of Beast was released in May 2016.)

We need to be careful interpreting results like this, of course, so perhaps we should look at different metrics. Here are the links to the number of closed issues for Beast, tufao, and Boost.Http:

502 closed issues in Beast:
https://github.com/boostorg/beast/issues?q=is%3Aissue+is%3Aclosed

38 closed issues in tufao:
https://github.com/vinipsmaker/tufao/issues?q=is%3Aissue+is%3Aclosed

13 closed issues in Boost.Http, from 6 unique users not including the author:
https://github.com/BoostGSoC14/boost.http/issues?q=is%3Aissue+is%3Aclosed

Again, we have to be careful interpreting results like this. But it sure looks like there is a lot of user participation in Beast. If approval from a large number of stakeholders is not a compelling design motivator, then what is?
> This tutorial is full of “design implications” blocks where I take the time to dissect what it means to go with each choice.
Thus far, no one has asked for more fine-grained access to incoming HTTP tokens in the manner of `code::method` and `code::request_target`. If this becomes something that users consistently ask for, it can be done by changing the requirements for the derived class of `basic_parser`. This way, details of HTTP parsing which most people don't care about will not leak into the beast::http:: namespace. Such a change would not affect existing stream algorithms that operate on parsers.

Thanks