On 25 Sep 2016 at 9:45, Nat Goodspeed wrote:

> > Asio sucks in the following point:
> > Its design is incapable of saving system calls by doing a batch poll.
>
> ? I admit I am not familiar with Asio's internals, but I would think
> that with multiple I/O requests pending, it could do exactly that. If
> it doesn't support that already, the io_service design would at least
> seem to permit it.

It certainly does do exactly this. There are many unfortunate things
about the ASIO reactor implementation, but failing to scale to i/o
load is not one of them.
> > Its design doesn't allow the user to specify a timeout, which is
> > essential in suspend_until.
>
> I assume the idiom would be to initiate I/O and also set a timer.
> Whichever handler is called first, cancel the other.

Exactly correct. AFIO v2 has explicit deadline i/o support in all its
APIs, but that's because AFIO v2 was designed last year, after v1
failed its peer review here. ASIO was designed back when cancelling
i/o didn't work on Windows (XP), so its API does not encourage i/o
cancellation. Last time I looked at the Networking TS I saw deadline
i/o support in its APIs, so I'm guessing the Networking TS reference
implementation has the same.
> > It lacks useful things for a coroutine-based system, like
> > filesystem operations.

ASIO fully supports the Coroutines TS as provided by recent MSVCs:
just use the future/promise completion model and feed the futures to
the await keyword. I know Gor has tested his Coroutines TS with the
Networking TS and they worked just fine.
> Boost asynchronous filesystem operations are a work in progress:
> https://ned14.github.io/boost.afio/index.html

Be aware this library is in a very early alpha state despite the many
conference presentations of it. It also currently only compiles on
Windows due to some missing POSIX implementation.

On 25 Sep 2016 at 19:54, Klemens Morgenstern wrote:

> > A full featured alternative is libuv, however it lacks the direct
> > yielding hooks, and has a different design from asio, especially
> > on stream reading.

libuv scales poorly to i/o load. Rust originally used libuv for an M:N
threading and i/o model, and had to abandon it due to poor
scalability. The other elephant in the room is that async i/o has very
poor performance on fast SSDs (it's more work, and SSDs can push 3.4M
4Kb IOPS nowadays, which is just crazy fast; async can't keep up).
You're much, much better off using sync i/o. This is why AFIO v2
barely has any async facilities; it's almost entirely synchronous.
Yes, I know that means it should be renamed to Boost.FIO, but I'll
cross that bridge later.
> > Maybe I will get patching into asio. I'm just sending for some
> > commence.
>
> I guess you mean comments. :D What I would keep in mind is that the
> Networking TS will probably move into the C++ Standard at some point.
> Maybe you should consider writing a library which uses boost.asio
> and is just built atop it.

The Networking TS has many flaws, but they are well understood flaws,
and the overall proposal is not bad for standardisation.

> Or maybe what you need to do can be done by boost.afio
> (https://github.com/ned14/boost.afio). Niall Douglas is working on
> that one, he's rather active on the mailing list, so you can surely
> ask him if he needs help.

Pull requests welcome, as I'm on my annual no-coding holiday after
CppCon until Christmas. A giant todo list can be found at the bottom
of https://github.com/ned14/boost.afio; any well implemented features
following the AFIO idiomatic implementation (no exceptions, no memory
allocation, KernelTest unit tested) are happily accepted.

Niall

-- 
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/