Re: [boost] [review] [json] My JSON review
On Mon, Sep 21, 2020 at 10:29 AM Mathias Gaunard wrote:
>>> In both cases, I'd like to read/write my data from/to JSON with the same framework.
>> Why? What, specifically, is the requirement here?
> What I'd like is a way to describe how my C++ types map to a key-value structure with normalized types, so that I can easily convert my objects back and forth through a structured, self-describing, and human-readable interchange format.

Right, but what I'm asking you is: *specifically* in what way would a framework that offers both JSON DOM and JSON Serialization be "consistent"? Can you show, in terms of declarations, what that would look like? In other words, I'm asking you to show, using example code, how putting these two separate concerns in a single library would offer benefits over having them in separate libraries. Thanks
On 21. Sep 2020, at 19:44, Vinnie Falco via Boost wrote:
> Right, but what I'm asking you is: *specifically* in what way would a framework that offers both JSON DOM and JSON Serialization be "consistent"? Can you show, in terms of declarations, what that would look like? In other words, I'm asking you to show, using example code, how putting these two separate concerns in a single library would offer benefits over having them in separate libraries.
The Boost.Serialization framework is able to do both. The two things needed are a) a JSON archive that follows the Archive concept and b) a serialize function for json::value. Once you have these two, you can convert the DOM type and/or any custom type to JSON and back.
On 21. Sep 2020, at 19:57, Hans Dembinski wrote:
> The Boost.Serialization framework is able to do both. The two things needed are a) a JSON archive that follows the Archive concept and b) a serialize function for json::value. Once you have these two, you can convert the DOM type and/or any custom type to JSON and back.
On second thought, the Archive concept probably needs an extension to work with json::value. While writing is no problem, reading is a challenge. For reading, Boost.Serialization usually relies on the fact that the type that is going to be read next from the archive is statically known. In the case of a boost::variant, a polymorphic type similar to json::value, it reads a type id first (some integer) and then calls the appropriate code based on that value. While that would also work for the json::value type, we obviously don't want to store explicit type ids in JSON. So the type info would have to come from the Archive itself in this case, which could probably do some look-ahead into the stream to provide info like "the next type in the stream is a number/string/array of 3 elements". Bottom line: it is not as easy as I thought.
On Mon, Sep 21, 2020 at 4:19 PM Hans Dembinski via Boost wrote:
> On second thought, the Archive concept probably needs an extension to work with json::value. While writing is no problem, reading is a challenge.
It depends on the lens you use. The Archive concept overloads primitive types to be serializable. Other serializables are always built on top of the primitives the underlying archive has special code for.

From one lens, you're using json::value against json::archive. If that's the case, it has special code here, so everything works fine, as has been described in previous emails.

From the second lens, you have a problem: an external archive. And here, the rest of the rationale you described in your email applies. You do need extensions in a separate concept, though. If you want to control beautiful JSON serialization, you need to expose a few extensions like Bjørn has done in his serialization code. I do disagree with a few things he has done, but we can reach consensus in the future.

One thing I want to develop here is integration with Boost.Hana reflection. If you have integration with the Boost.Hana Struct concept, it helps a ton. First, creating new serializables will be very easy. Second, you avoid the whole "index/buffer/backtrack/lookahead" problem you guys mentioned. This won't work for push parsers, but pull parsers will benefit a lot.
> Bottom line: it is not as easy as I thought.
True. I would like to discuss this subject, but I need to organize my own thoughts first. It'd be nice if you, Mathias Gaunard, Robert Ramey, and other stakeholders could gather together to explore this topic in detail after the Boost.JSON review.

--
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/
participants (3)
- Hans Dembinski
- Vinnie Falco
- Vinícius dos Santos Oliveira