Re: [boost] [fiber] Suggestions regarding asio
When considering using multiple threads with an io_service passed to a custom Boost.Fiber scheduler, I keep bumping into the problem that a given fiber scheduler instance must always run on its original thread -- but I know of no way to give an Asio handler "thread affinity." How do we get processor cycles to the scheduler on the thread whose fibers it is managing?

It would be better if you used "Reply to All", since I disabled delivery from the list.

Since we only poll on one master thread, we can easily get affinity. Maybe pseudocode is easier:

* We have two classes of threads: one does the main polling, the others wait for jobs.
* The first one polls from Asio the same way as the example (poll, or run_one):

    while (true) {
        if (!run_one())
            stop_other_threads(); // no more work in asio, give up waiting
        poll();                   // flush the queue
        // Now we should have some fibers in the queue.
        yield();                  // continue execution
    }

* The other threads use a condition variable to wait on the ready queue:

    awakened() { queue.push_back(f); cv.notify_[one,all](); iosvc.post([]{}); }
    pick_next() { while (!jobs_available) { cv.wait(); } return job; }
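To make the shape of the worker side concrete, here is a rough C++ sketch against Boost.Fiber's algo::algorithm interface. The shared_state type and all of its plumbing are illustrative assumptions, not tested code; note that blocking inside pick_next() is exactly what this proposal implies.

    #include <boost/asio.hpp>
    #include <boost/fiber/all.hpp>
    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <mutex>

    // State shared between the polling thread and the worker threads.
    struct shared_state {
        std::mutex mtx;
        std::condition_variable cv;
        std::deque<boost::fibers::context*> ready;  // shared ready queue
        boost::asio::io_service& iosvc;
        explicit shared_state(boost::asio::io_service& io) : iosvc(io) {}
    };

    // Installed on each worker thread via
    // boost::fibers::use_scheduling_algorithm<worker_algo>(st).
    class worker_algo : public boost::fibers::algo::algorithm {
        shared_state& st_;
    public:
        explicit worker_algo(shared_state& st) : st_(st) {}

        void awakened(boost::fibers::context* f) noexcept override {
            { std::lock_guard<std::mutex> lk(st_.mtx); st_.ready.push_back(f); }
            st_.cv.notify_one();   // wake a sleeping worker
            st_.iosvc.post([]{});  // nudge the poller out of run_one()
        }

        boost::fibers::context* pick_next() noexcept override {
            std::unique_lock<std::mutex> lk(st_.mtx);
            st_.cv.wait(lk, [this] { return !st_.ready.empty(); });
            boost::fibers::context* f = st_.ready.front();
            st_.ready.pop_front();
            return f;
        }

        bool has_ready_fibers() const noexcept override {
            std::lock_guard<std::mutex> lk(st_.mtx);
            return !st_.ready.empty();
        }

        // Left as near no-ops here because pick_next() blocks; a real
        // scheduler must honor the wake-up time_point (see the timer
        // discussion later in this thread).
        void suspend_until(std::chrono::steady_clock::time_point const&) noexcept override {}
        void notify() noexcept override { st_.cv.notify_all(); }
    };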
On Thu, Sep 22, 2016 at 11:46 PM, Tatsuyuki Ishi wrote:
When considering using multiple threads with an io_service passed to a custom Boost.Fiber scheduler, I keep bumping into the problem that a given fiber scheduler instance must always run on its original thread -- but I know of no way to give an Asio handler "thread affinity." How do we get processor cycles to the scheduler on the thread whose fibers it is managing?
Since we only poll on one master thread, we can easily get affinity.
Maybe pseudocode is easier:

* We have two classes of threads: one does the main polling, the others wait for jobs.
* The first one polls from Asio the same way as the example (poll, or run_one):

    while (true) {
        if (!run_one())
            stop_other_threads(); // no more work in asio, give up waiting
        poll();                   // flush the queue
        // Now we should have some fibers in the queue.
        yield();                  // continue execution
    }
I think I see. The while loop above is in the lambda posted by boost::fibers::asio::round_robin::service's constructor? Would it still be correct to recast it as follows?

    while (run_one()) {
        poll();   // flush the queue
        // Now we should have some fibers in the queue.
        yield();  // continue execution
    }
    stop_other_threads(); // no more work in asio, give up waiting

We might want to post() a trivial lambda to the io_service first, because at the time that lambda is entered, the consuming application might or might not already have posted work to the io_service.
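As a sketch of what I mean, assuming this all lives in a plain function driving the io_service on the master thread (run_service and stop_other_threads are hypothetical names):

    // Hedged sketch, master thread only.
    void run_service(boost::asio::io_service& io) {
        io.post([]{});  // guarantee run_one() finds a handler the first time
        while (io.run_one()) {
            io.poll();                   // flush the handler queue
            boost::this_fiber::yield();  // run whatever fibers became ready
        }
        stop_other_threads();  // run_one() returned 0: no more asio work
    }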
* The other threads use a condition variable to wait on the ready queue:

    awakened() { queue.push_back(f); cv.notify_[one,all](); iosvc.post([]{}); }
    pick_next() { while (!jobs_available) { cv.wait(); } return job; }
Okay: you're suggesting to share the ready queue between multiple threads. One of them directly interacts with the io_service in question; the others only run ready fibers.

I'm nervous about waiting on the condition_variable in pick_next() instead of suspend_until(), because I'm concerned about losing the fiber manager's knowledge of when it might next need to run a ready fiber -- due to either sleep or timeout. In fact, as I think about it, we'd probably need to share among participating threads an asio timer and a current earliest time_point. Each participating thread's suspend_until() would check whether the passed time_point is earlier than the current earliest time_point, and if so reset the shared timer. (I think for that we could get away with a direct call into the io_service from the worker thread.) I haven't quite convinced myself yet that that would suffice to wake up asio often enough.

It sounds interesting, and probably a useful tactic to add to the examples directory. I don't have time right now to play with the implementation. If you get a chance to make it work before you hear back from me, please post your code. Thank you very much for your suggestion!
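P.S. To make the shared-timer idea above concrete, a rough sketch. The timer_state type and the reset rule are illustrative assumptions; also, asio timer objects are not thread-safe, so the direct call from a worker thread would need more care than shown here.

    #include <boost/asio.hpp>
    #include <boost/asio/steady_timer.hpp>
    #include <chrono>
    #include <mutex>

    // One instance shared by all participating threads.
    struct timer_state {
        std::mutex mtx;
        boost::asio::steady_timer timer;
        std::chrono::steady_clock::time_point earliest =
            std::chrono::steady_clock::time_point::max();
        explicit timer_state(boost::asio::io_service& io) : timer(io) {}
    };

    // Each thread's suspend_until() calls this before blocking on the
    // shared condition variable.
    void note_wakeup_time(timer_state& st,
                          std::chrono::steady_clock::time_point const& tp) {
        std::lock_guard<std::mutex> lk(st.mtx);
        if (tp < st.earliest) {
            st.earliest = tp;
            st.timer.expires_at(tp);  // re-arm to the new earliest deadline
            st.timer.async_wait([&st](boost::system::error_code const&) {
                // Waking the io_service is the whole point: run_one()
                // returns, poll() flushes, and ready fibers get scheduled.
                std::lock_guard<std::mutex> lk2(st.mtx);
                st.earliest = std::chrono::steady_clock::time_point::max();
            });
        }
    }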
Hmm, this is a little off-topic, but I'd like to describe why I'm now searching for an alternative.

Asio sucks on the following points:

* Its design cannot save system calls by doing a batch poll.
* Its design doesn't allow the user to specify a timeout, which is essential in suspend_until.
* It lacks useful things for a coroutine-based system, like filesystem operations.

A full-featured alternative is libuv; however, it lacks the direct yielding hooks, and has a different design from Asio, especially for stream reading.

Maybe I will try patching Asio. I'm just sending for some commence.
On Sun, Sep 25, 2016 at 7:17 AM, Tatsuyuki Ishi wrote:
Its design cannot save system calls by doing a batch poll.
? I admit I am not familiar with Asio's internals, but I would think that with multiple I/O requests pending, it could do exactly that. If it doesn't support that already, the io_service design would at least seem to permit it.
Its design doesn't allow the user to specify a timeout, which is essential in suspend_until.
I assume the idiom would be to initiate I/O and also set a timer. Whichever handler is called first, cancel the other.
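Something like this minimal sketch, assuming a connected tcp::socket (all names are illustrative):

    #include <boost/asio.hpp>
    #include <iostream>

    int main() {
        boost::asio::io_service io;
        boost::asio::ip::tcp::socket sock(io);
        // ... assume sock gets connected here ...

        boost::asio::deadline_timer timer(io);
        timer.expires_from_now(boost::posix_time::seconds(5));
        timer.async_wait([&](boost::system::error_code const& ec) {
            if (!ec) sock.cancel();  // timeout fired first: abort the read
        });

        char buf[1024];
        sock.async_read_some(boost::asio::buffer(buf),
            [&](boost::system::error_code const& ec, std::size_t n) {
                timer.cancel();      // I/O finished first: stop the timer
                if (ec == boost::asio::error::operation_aborted)
                    std::cout << "read timed out\n";
                else
                    std::cout << "read " << n << " bytes\n";
            });

        io.run();
    }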
It lacks useful things for a coroutine-based system, like filesystem operations.
Boost asynchronous filesystem operations are a work in progress: https://ned14.github.io/boost.afio/index.html
Maybe I will try patching Asio. I'm just sending for some commence.
It might be good to start a new mail thread with a subject line such as: [asio] enhancement suggestions/requests
The attached file is a reference implementation. I haven't really tested it. I have noticed some tricky things:

* The timer queue is not work-steal aware. Maybe it should be managed by the algorithm. Asio's timer can be used as an alternative for my case.
* Scheduler destruction-order fiasco: static, or memory leak, or shared_ptr (I found it hard to match my design). The problem is that the scheduler is destroyed after main() scope ends.
On 25 Sep 2016 at 9:45, Nat Goodspeed wrote:
Asio sucks on the following points:
Its design cannot save system calls by doing a batch poll.
? I admit I am not familiar with Asio's internals, but I would think that with multiple I/O requests pending, it could do exactly that. If it doesn't support that already, the io_service design would at least seem to permit it.
It certainly does do exactly this. There are many unfortunate things with the ASIO reactor implementation, but failing to scale to i/o load is not one of them.
Its design doesn't allow the user to specify a timeout, which is essential in suspend_until.
I assume the idiom would be to initiate I/O and also set a timer. Whichever handler is called first, cancel the other.
Exactly correct. AFIO v2 has explicit deadline i/o support in all its APIs, but that's because AFIO v2 was designed last year after v1 failed its peer review here. ASIO was designed back when cancelling i/o didn't work on Windows (XP) so its API does not encourage i/o cancellation. Last time I looked at the Networking TS I saw deadline i/o support in its APIs, so I'm guessing the Networking TS reference implementation has the same.
It lacks useful things for a coroutine-based system, like filesystem operations.
ASIO fully supports the Coroutines TS as provided by recent MSVCs. Just use the future/promise completion model and feed the futures to the await keyword. I know Gor has tested his Coroutines TS with the Networking TS and they worked just fine.
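A hedged sketch of that model, assuming MSVC's /await implementation circa 2016 (early MSVC shipped an extension making std::future awaitable and usable as a coroutine return type; the keyword was spelled 'await' before it became 'co_await') plus Asio's use_future completion token; untested, and the adapter details varied between compiler releases:

    #include <boost/asio.hpp>
    #include <boost/asio/use_future.hpp>
    #include <future>

    // The use_future token makes async_read_some return a std::future, so
    // no handler is written by hand; awaiting it suspends the coroutine
    // until the I/O completes.
    std::future<void> echo_once(boost::asio::ip::tcp::socket& sock) {
        char buf[1024];
        std::size_t n = co_await sock.async_read_some(
            boost::asio::buffer(buf), boost::asio::use_future);
        co_await boost::asio::async_write(
            sock, boost::asio::buffer(buf, n), boost::asio::use_future);
    }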
Boost asynchronous filesystem operations are a work in progress: https://ned14.github.io/boost.afio/index.html
Be aware this library is in a very early alpha state despite the many conference presentations of it. It also currently only compiles on Windows due to some missing POSIX implementation.

A full-featured alternative is libuv; however, it lacks the direct yielding hooks, and has a different design from Asio, especially for stream reading.

libuv scales poorly to i/o load. Rust originally used libuv for an M:N threading and i/o model; they had to abandon it due to poor scalability. The other elephant in the room is that async i/o has very poor performance with fast SSDs (it's more work, and SSDs can push 3.4M 4Kb IOPS nowadays, which is just crazy fast; async can't keep up). You're much, much better off using sync i/o. This is why AFIO v2 barely has any async facilities; it's almost entirely synchronous. Yes, I know that means it should be renamed to Boost.FIO, but I'll cross that bridge later.

On 25 Sep 2016 at 19:54, Klemens Morgenstern wrote:

Maybe I will try patching Asio. I'm just sending for some commence. I guess you mean comments. :D What I would keep in mind is that the Networking TS will probably move into the C++ Standard at some point. Maybe you should consider writing a library which uses boost.asio and just builds atop it.

The Networking TS has many flaws, but they are well understood flaws, and the overall proposal is not bad for standardisation.

Or maybe what you need to do can be done by boost.afio (https://github.com/ned14/boost.afio). Niall Douglas is working on that one; he's rather active on the mailing list, so you can surely ask him if he needs help.

Pull requests welcome, as I'm on my annual no-coding holiday after CppCon until Christmas. A giant todo list can be found at the bottom of https://github.com/ned14/boost.afio; any well-implemented features following the AFIO idiomatic implementation (no exceptions, no memory allocation, KernelTest unit tested) are happily accepted.

Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On 25.09.2016 at 16:17, Tatsuyuki Ishi wrote:
Hmm, this is a little off-topic, but I'd like to describe why I'm now searching for an alternative.
Asio sucks on the following points: Its design cannot save system calls by doing a batch poll.

What exactly do you mean by that? You can either use a coroutine to achieve exactly that, or build a simple class which polls and posts the tasks on the io_service.
Its design doesn't allow the user to specify a timeout, which is essential in suspend_until.

You can use a timer and either cancel the IO object or the io_service; see the sketch below.

It lacks useful things for a coroutine-based system, like filesystem operations.

What exactly? Why not use coroutines!?
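Here's a minimal sketch of the io_service variant of the timer suggestion above, assuming an io_service named io is in scope (cancelling a single IO object instead would look the same but call sock.cancel()):

    boost::asio::deadline_timer timer(io);
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait([&](boost::system::error_code const& ec) {
        if (!ec) io.stop();  // deadline reached: abandon all outstanding work
    });
    io.run();                // returns early once stop() is called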
A full-featured alternative is libuv; however, it lacks the direct yielding hooks, and has a different design from Asio, especially for stream reading.
Maybe I will try patching Asio. I'm just sending for some commence.

I guess you mean comments. :D

What I would keep in mind is that the Networking TS will probably move into the C++ Standard at some point. Maybe you should consider writing a library which uses boost.asio and just builds atop it.
Or maybe what you need to do can be done by boost.afio (https://github.com/ned14/boost.afio). Niall Douglas is working on that one; he's rather active on the mailing list, so you can surely ask him if he needs help.
On Sun, Sep 25, 2016 at 10:17 AM, Tatsuyuki Ishi wrote:
I'd like to describe why I'm now searching for an alternative.
Asio sucks on the following points: Its design cannot save system calls by doing a batch poll. Its design doesn't allow the user to specify a timeout, which is essential in suspend_until. It lacks useful things for a coroutine-based system, like filesystem operations.
A refined iteration on Boost.Asio is poised to become part of the C++ standard library (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/n4588.pdf). Once this becomes standardized, I would imagine that all other low-level networking implementations for C++ will go away over time - after all, why would someone bother with an additional dependency when such functionality is already offered by the standard library? Therefore, if you feel there are deficiencies in either the interface or the implementation of Boost.Asio, I urge you to contact the author and the networking working group and present actionable claims. It's not too late to make justifiable interface changes now.
participants (5)
- Klemens Morgenstern
- Nat Goodspeed
- Niall Douglas
- Tatsuyuki Ishi
- Vinnie Falco