On 4 Jul 2014 at 21:59, Dean Michael Berris wrote:
Some would call that the low-level interfacing necessary to achieve bare metal performance.
But, there's no reason you couldn't decompose this and not make it part of the invariants of the objects associated with them. For example, when you create a socket object, can you not do that by acquiring it from some function that internally would determine how it's wired?
I agree this is what should be done in a ground up refactor. But that isn't what's on the table. What is proposed is to standardise a subset of the common practice.
- The fact that the only apparent implementation of the networking parts (and the inter-operating bits implied by the interface) would be through a proactor -- the API is purely proactive, with no support for purely reactive applications.
Kinda hard to do true async without one.
Can you expand this a little?
http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/overview/core/async...
http://www.artima.com/articles/io_design_patterns2.html

You may of course not have meant reactor when you wrote "reactive". For me personally, how QNX (and I assume Hurd) does async i/o is the gold standard. The NT kernel does a very faithful emulation of true async. BSD at least provides a usable aio_* POSIX API. The rest really aren't great at async.
- There's no means for doing "zero-copy" I/O which is a platform-specific but modern way of doing data transfer through existing interfaces.
I don't know what you refer to here. ASIO doesn't copy data being sent or received. It passes through literally what it receives from the OS.
Right.
Does it support direct-memory I/O -- where I write to some memory location instead of scheduling a write? See RDMA, and on Linux vmsplice(...). Can I "donate" the memory I have from user-space to the kernel, which just remaps the memory onto a device's memory buffer? How do I achieve "raw" buffer performance if incoming data must be written by the kernel into a user-space allocated buffer, and outgoing data must first be copied from a user-space buffer passed to the kernel?
We've moved on to better interfaces in the lower levels of the networking stack, and it would be a shame if we precluded this from being adopted by a standard library specification.
Yeah ... well, actually no, I wouldn't call vmsplice() or any of its ilk anything deserving the label "better". That whole way of making DMA possible is a hack forced by Linux's networking stack being incapable of using DMA automatically. After all, BSD and Windows manage DMA with socket i/o automatically and transparently. No magic syscalls needed. Linux should up its game here instead of resorting to syscall hacks. Even if splice() et al. were a good idea to encourage, they aren't standardised by POSIX and are therefore out of scope for standardisation by C++. ISO standards are there to standardise established common practice, not try to design through central committee.
A few that come to mind for potential approaches here are:
- Transport-level abstractions. Considering whether policy-driven transport protocol abstractions (say, an HTTP connection, a raw TCP connection, an SCTP connection with multiple streams, etc.) would be more suitable higher-level abstractions than sockets and read/write buffers.
- Agent-based models. More precisely, explicitly having an agent of sorts, or an abstraction of a client/server to which work could be delegated, composed with executors and schedulers.
These are all outside the remit of a core C++ networking library.
Why?
ISO standards are there to standardise established common practice, not try to design through central committee. Also, think in terms of baby steps. Start with a good solid low level async networking library which is tightly integrated into threading, executors and the rest of the STL async facilities. That already will be hideously hard to do. For the next major C++ standard build on that with better abstractions.
I say this with all due respect -- while network programming sounds like it's all about sockets, buffers, event loops, and such, there are the "boring" bits like addressing, network byte ordering, encoding/decoding algorithms (for strings and blobs of data), framing algorithms, queueing controls, and even back-off algorithms, congestion control, and traffic shaping. There are more things to think about too, like data structures for efficiently representing frames, headers, network buffers, routing tables, read-write buffer halves, IP address tries, network topologies, protocol encodings (ASN.1, MIME and friends), and a whole host of network-related concepts we're not even touching in the networking discussions.
I'm personally not unsympathetic to this sentiment. However, it would surely need POSIX to move on it first before C++ could.
Why?
Once again: ISO standards are there to standardise established common practice, not try to design through central committee. If all the platforms managed things like routing interfaces identically, we could argue in favour of standardising them on their merits. But they don't, so we can't. One would also be extremely hesitant to standardise anything which hasn't been given full and proper deliberation by the ISO working group responsible for it. I feel little love for how the AWG see the world personally (I find interacting with them deeply frustrating), but they have their role in ISO and they haven't done too bad a job of things in the wider picture.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/