On Mon, 21 Sep 2020 at 17:49, Vinnie Falco
This looks very much like it depends on the order of appearance of keys in the JSON, which is non-standard.
You'd parse the object until you reach the desired key, building an index of the keys you meet on the way (possibly with extra information such as where the associated value starts and ends). For the next lookup, either the key is already in the index, so you use it and then discard it, or it lies ahead, so you continue with the same algorithm. That is optimized for keys appearing in the expected order and ignores extra keys, while still supporting out-of-order keys (albeit with some extra storage that grows with your nesting level).
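A minimal sketch of that idea, using an in-memory vector of (key, value) string pairs to stand in for the parser's forward pass (the class and its names are illustrative, not any library's API):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: resolve keys in the order the caller asks for them
// over a single forward pass. Keys met before the requested one go into a
// side index; a later request checks that index first, so out-of-order
// keys cost only the extra storage.
class ordered_lookup {
public:
    explicit ordered_lookup(std::vector<std::pair<std::string, std::string>> pairs)
        : pairs_(std::move(pairs)) {}

    // Returns nullptr if the key is absent.
    const std::string* find(const std::string& key) {
        auto it = index_.find(key);                // already passed and indexed?
        if (it != index_.end())
            return &pairs_[it->second].second;
        while (pos_ < pairs_.size()) {             // otherwise scan forward
            std::size_t i = pos_++;
            if (pairs_[i].first == key)
                return &pairs_[i].second;
            index_.emplace(pairs_[i].first, i);    // remember skipped keys
        }
        return nullptr;
    }

private:
    std::vector<std::pair<std::string, std::string>> pairs_;
    std::map<std::string, std::size_t> index_;
    std::size_t pos_ = 0;
};
```

When keys arrive in the expected order the index stays empty and each lookup is a straight scan; only out-of-order accesses pay for the map.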
In both cases, I'd like to read/write my data from/to JSON with the same framework.
Why? What, specifically, is the requirement here?
What I'd like is a way to describe how my C++ types map to a key-value structure with normalized types, so that I can easily convert my objects back and forth through a structured, self-describing, human-readable interchange format. Ideally I'd like to do so without repeating myself with all kinds of converters, serializers, etc., and reasonably efficiently, which is why I'd like these things to be reasonably unified. Boost.Serialization doesn't do this, since it takes the approach of designing a data format that can serialize any C++ structure, rather than mapping a C++ structure onto a simpler text-based structure.
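To make the "describe once, use for both directions" point concrete, here is a minimal sketch. The `Person` type, the `for_each_member` visitor, and the string-only value type are all assumptions for illustration; the point is that a single member mapping drives both the writer and the reader:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <type_traits>

struct Person {
    std::string name;
    int age = 0;
};

// Single point of truth: visit each member as a (key, reference) pair.
// (Hypothetical mechanism; a real library might generate this via
// reflection macros rather than a hand-written function.)
template <class F>
void for_each_member(Person& p, F f) {
    f("name", p.name);
    f("age", p.age);
}

// Writer: walks the same mapping, normalizing members to strings.
std::map<std::string, std::string> to_kv(Person p) {
    std::map<std::string, std::string> kv;
    for_each_member(p, [&](const char* key, auto& member) {
        if constexpr (std::is_same_v<std::decay_t<decltype(member)>, int>)
            kv[key] = std::to_string(member);
        else
            kv[key] = member;
    });
    return kv;
}

// Reader: walks the same mapping in the other direction.
Person from_kv(const std::map<std::string, std::string>& kv) {
    Person p;
    for_each_member(p, [&](const char* key, auto& member) {
        if constexpr (std::is_same_v<std::decay_t<decltype(member)>, int>)
            member = std::stoi(kv.at(key));
        else
            member = kv.at(key);
    });
    return p;
}
```

Swapping the `std::map` for a JSON object representation would give serialization and deserialization from one description, which is the kind of unification I mean.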