On Sun, Nov 24, 2019 at 3:53 AM Bjorn Reese via Boost wrote:
>> This library also supports both incremental parsing and incremental serialization using caller-provided buffers, an important use case for building high-performing network programs. To my knowledge no other JSON library supports this.
> Have been doing this for years:
>
> http://breese.github.io/trial/protocol/trial_protocol/core.html
I'm not seeing where trial.protocol has incremental algorithms; perhaps you can show me? trial::protocol::json::basic_reader constructs with the complete input:

https://github.com/breese/trial.protocol/blob/4bdf90747944f24b61aa9dbde92d8f...

There is no API to provide additional buffers. By "incremental" I mean an "online algorithm," i.e. the entire input does not need to be presented at once. For example, this is what it might look like using boost.json to incrementally parse a JSON from a socket:

    json::value
    parse( net::ip::tcp::socket& sock )
    {
        error_code ec;
        json::parser p;
        p.start();
        for(;;)
        {
            char buf[4096];
            auto const n = sock.read_some(
                net::mutable_buffer(buf, sizeof(buf)), ec);
            if(ec == net::error::eof)
                break;
            if(ec)
                throw system_error(ec);
            p.write(buf, n, ec);
            if(ec)
                throw system_error(ec);
        }
        p.finish();
        return p.release();
    }

Serialization functions similarly. The caller provides a buffer, and the implementation attempts to fill the buffer with serialized JSON. If the buffer is not big enough, subsequent calls may be made to retrieve the rest of the serialized output.
> So most (all?) values are stored on the heap?
json::object, json::array, and json::string use dynamic allocations to store elements. The string has a small buffer optimization.
> Notice that with the right design we can support all of these use cases without making the library more complex.
Respectfully, I disagree. The parser/serializer is what it is, and is designed to be optimal for the intended use case, which is going to and from the DOM (the `json::value`). Perhaps there is another, "right design" which makes a different set of tradeoffs, but that can be the subject of a different library.
> If your design becomes a Boost library, then there will be very little incentive to include yet another JSON library to handle the remaining use cases. That is why I am asking these questions up-front.
I disagree. Again, the central premise here is that there is no ideal JSON library which can suit all needs. I believe this is why Boost does not yet have a JSON library: optimizing one use case necessarily comes at the expense of others. At the very least, the inversion of the flow of control (i.e. a parser which returns a token at a time) advantages one use case and disadvantages others (such as working as an online algorithm). There are other tradeoffs.

Because my library addresses a narrow set of uses, there should be more than enough incentive for other pioneers to step in and fill the leftover gaps. And they can do so knowing that they do not need to be everything to everyone.

Regards