This was one piece of feedback posted during the Boost.JSON review of September 2020:
For the record, I've had offlist email discussions about proposed Boost.JSON with a number of people where the general feeling was that there was no point in submitting a review, as negative review feedback would be ignored, possibly with personal retribution thereafter, and the library was always going to be accepted in any case. So basically it would be wasted effort, and they haven't bothered.
Unless an impassioned on-list reply counts as "personal retribution," I think it is safe to say that the aforementioned retribution never took place. However, the false claim that "the library was always going to be accepted in any case" is genuinely harmful to the reputation of the Boost Formal Review process. As I believe that the review process is a vital piece of social technology that has made the Boost C++ Library Collection the best of breed, I'd like to avoid having the review of the upcoming proposed Boost.URL submission tainted with similar aspersions. Therefore let me state unequivocally: I have no interest in persecuting individuals for criticizing my library submissions. In fact, I welcome negative feedback, as it affords the opportunity to make the library better, regardless of who is providing it. I am very happy to hear criticisms of my libraries even from individuals who are actively hostile. However, I do have an interest in vigorously opposing bad ideas, such as this one, which was tacked on to the end of the message quoted above:
Consider this: a Hana Dusíková type all-constexpr JSON parser could let you specify to the compiler at compile time "this is the exact structure of the JSON that shall be parsed". The compiler then bangs out optimum parse code for that specific JSON structure input. At runtime, the parser tries the pregenerated canned parsers first, if none match, then it falls back to runtime parsing
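To make the proposal concrete, here is a rough sketch of how I read it. Everything named below (shape, fast_parse, parse_with_fallback) is hypothetical and invented purely for illustration; only the fallback call into the ordinary runtime parser is real Boost.JSON.

```cpp
// Hypothetical sketch only: `shape`, `fast_parse`, and `parse_with_fallback`
// are invented names illustrating the proposal; they are not part of
// Boost.JSON or any other library.
// Build note: link against Boost.JSON, or include <boost/json/src.hpp>
// in exactly one translation unit.
#include <boost/json.hpp>
#include <optional>

// A compile-time description of the expected structure, e.g. an object
// holding exactly the fields "id" (number) and "name" (string).
struct id_field   { static constexpr char const* key = "id"; };
struct name_field { static constexpr char const* key = "name"; };

template<class... Fields>
struct shape {};

// Stand-in for the "pregenerated canned parser" specialized for one shape.
// A real implementation would emit parse code tailored to that exact
// structure; this placeholder simply reports "input did not match".
template<class Shape>
std::optional<boost::json::value>
fast_parse(boost::json::string_view input)
{
    (void)input;
    return std::nullopt;
}

// Try the canned parser first; if the input does not match the declared
// shape, fall back to the ordinary runtime parser.
template<class Shape>
boost::json::value
parse_with_fallback(boost::json::string_view input)
{
    if(auto v = fast_parse<Shape>(input))
        return std::move(*v);
    return boost::json::parse(input);
}

int main()
{
    using user = shape<id_field, name_field>;
    auto jv = parse_with_fallback<user>(R"({"id": 1, "name": "boost"})");
    (void)jv;
}
```

Note that the sketch says nothing about where the resulting values are allocated or constructed, which is exactly where the difficulty lies.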
The totality of the experience gained in developing Boost.JSON suggests that this proposed design is deeply flawed. The bulk of the work in achieving performance comparable to RapidJSON went not into the parsing itself but into the allocation and construction of the DOM objects during the parse. This necessitated a profound coupling between parsing and the creation of json::value objects. I realize of course that this will invite contradictory replies ("all you need to do is..."), but as my conclusion was reached only after months of experimentation culminating in the production of a complete, working prototype, I would just say: show a working prototype, then let's talk.
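To illustrate what that coupling looks like from the outside: the memory resource that will own the resulting DOM is handed to the parse call itself, because json::value objects are allocated and constructed while the input is being consumed rather than in a separate pass. A minimal example using the public Boost.JSON interface (link against the library, or include <boost/json/src.hpp> in one translation unit):

```cpp
#include <boost/json.hpp>
#include <iostream>

int main()
{
    // The allocation strategy for the DOM is a parameter of the parse:
    // every json::value produced below is placed in `mr` as the input
    // is consumed, not assembled afterwards from an intermediate
    // parse tree.
    boost::json::monotonic_resource mr;
    boost::json::value jv = boost::json::parse(
        R"({"pi": 3.141, "primes": [2, 3, 5, 7]})", &mr);

    std::cout << jv << "\n"; // prints the re-serialized document
}
```

A structure-specialized front end would still have to feed this same construction machinery, and that is where the real cost lies.

Regards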