
On 19 Aug 2014 at 9:37, Robert Ramey wrote:
>> Such a design would not make good use of hardware DMA such that ...
> My view is that a) the above interface would be easy to understand, and b) the above interface doesn't preclude any particular implementation and/or optimization at a lower level. The above is really a facade over some implementation.
And, as I have mentioned many times now, such an easy-to-use upper facade is coming with the monadic continuations framework based around expected.
>> That rules out all serialisation usually :)
> The serialization library depends on some stream or stream buffer implementation to do the i/o. It's possible to craft a standard stream buffer such that there would be no extraneous copies as part of the serialization. Basically the process of turning a data structure into a sequence of bytes is orthogonal to the process of actually doing any i/o. I hadn't thought about it, but an async i/o interface similar to the above would be a great complement to the serialization library, and would be even more popular for this reason.
I really don't think you fully understand what async i/o means to design, but perhaps I can help. What async i/o means to deserialisation is this: there are no current file pointers, and there are no guarantees that data is read consecutively. Indeed, when you read a 1Mb file, you see individual 0.5Kb/4Kb/64Kb sized chunks from *anywhere* in the file extent randomly appear in memory in a random order. These chunks are just as likely to appear at the end as at the beginning or anywhere else, and it is not just expected but known that gaps between consecutive pieces may exist for extended periods of time.

Your code now has a choice. You can either block on missing chunks until they appear, which is equal and equivalent to using a memory mapped file - in which case, you should be using a memory mapped file instead and save yourself the hassle. Or you process individual fragments and regions as soon as enough of them appear in memory, and join up those parts as it becomes possible.

If you think this through, you realise that your serialisation format must now be very different. Firstly, you need metadata describing which regions of the serialised format can be processed individually, and that needs to be loaded completely before all else. Secondly, you must serialise into a recursive descent format, because you are going to be deserialising from the inside outwards - XML is an excellent example of a format well suited to asynchronous i/o based parsing, but almost any format /can/ be stored in a recursive descent format e.g. images.

My point is that a linear serialisation format doesn't suit asynchronous i/o, and you shouldn't bother with async i/o with a linear serialisation format as the benefits aren't worth the considerable hassle. Does this make more sense?
> Anyway, I just read (some of) the documentation and logged my experience with it. I hope it can be useful in some way.
It's very useful. You're helping me understand where other engineers are finding trouble in making sense of me. Please keep going.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/