On Mon, Nov 18, 2019 at 1:14 AM Dominique Devienne via Boost wrote:
Regarding those benchmarks, could you please:

1) Provide synthetic graphs?

2) Better explain what the benchmark does? Those sizes and durations yield very low throughput numbers, so you're obviously doing the parsing several times in a loop. Please add details on that to the page, and calculate the real MB/s throughput as well. Peak memory usage would also be of interest.

3) The smallest file parsed is ~600 KB, while in some (important, IMHO) use cases the inputs are much smaller files of just a few bytes or low KBs, but there are lots of them (thousands, millions). In such cases, the constant overhead of setting up the parser and/or instantiating the root value matters, since it might dominate the parsing time itself. Would it be possible to test that use case too, please?
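To make point 2) and 3) concrete, here is a minimal sketch of the kind of measurement being asked for: parse many small documents in a loop and report real MB/s, so the per-document constant overhead shows up in the number. Python's json module stands in for the C++ parser here; the function name and the sample documents are illustrative, not from the benchmark page.

```python
# Hypothetical sketch: measure parse throughput over many small JSON
# documents, so parser-setup / root-value overhead is included per document.
import json
import time

def parse_throughput_mb_s(payloads, iterations=100):
    """Parse each payload `iterations` times; return (elapsed_s, mb_per_s)."""
    total_bytes = sum(len(p) for p in payloads) * iterations
    start = time.perf_counter()
    for _ in range(iterations):
        for p in payloads:
            json.loads(p)  # setup + parse cost is paid once per document
    elapsed = time.perf_counter() - start
    return elapsed, (total_bytes / 1e6) / elapsed

# Thousands of tiny documents (a few dozen bytes each), as in point 3).
small_docs = ['{"id": %d, "name": "item-%d"}' % (i, i) for i in range(1000)]
elapsed, mbps = parse_throughput_mb_s(small_docs, iterations=10)
print("parsed %.3f MB in %.3f s -> %.1f MB/s"
      % (sum(len(d) for d in small_docs) * 10 / 1e6, elapsed, mbps))
```

Comparing the MB/s figure from this many-small-files run against the large-file runs would show how much of the time goes to per-parse constant overhead rather than raw parsing.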
How about this: https://vinniefalco.github.io/doc/json/json/benchmarks.html

Thanks