On Sun, Jun 25, 2017 at 7:12 PM, Niall Douglas via Boost
Most users, I should imagine, would therefore build scatter-gather lists on the stack, since they'll be thrown away immediately; indeed, I usually feed it curly-braced initializer_lists personally.
Thus imposing a limitation on the size of the buffer sequence.
I think you are going to have to back up your claim that memory copying all incoming data is faster, rather than the discontiguous-storage approach merely suffering from bad implementation techniques.
I'll let Kazuho back it up for me, since I use his ideas: https://github.com/h2o/picohttpparser

Here's the slide show explaining the techniques: https://www.slideshare.net/kazuho/h2o-20141103pptx

And here is an example of the optimizations possible with linear buffers, which I plan to incorporate directly into Beast in the near future: https://github.com/h2o/picohttpparser/blob/2a16b2365ba30b13c218d15ed99915763...

Of course, if you think you can do better, I would love to see your working parser that operates on multiple discontiguous, high-quality, ring-buffered, page-aligned, DMA-friendly storage iterators, so that I might compare the performance. The good news is that you can do so while leveraging Beast's message model to produce objects that people using Beast already understand. Except that you'll be producing them much, much faster (which is a win-win for everyone).