Oliver Kowalke
2015-09-11 10:48 GMT+02:00 Giovanni Deretta:
> The idea is that yield and friends would switch to the idle fiber only when they reach the end of the ready queue.
The idle fiber executes only the function idle() - so why not simply call idle() instead of switch_context(n, this_context); at the end of scheduler::yield()?
What are the reasons that idle() must run on an extra fiber stack?
As I said elsewhere, there is no fundamental reason. Although I consider the idle fiber the better solution, I would be perfectly fine with a scheduler that doesn't have one and simply calls the idle function when appropriate.

Note that unconditionally calling idle() at the end of yield is not necessarily ideal, though: the idea is that it might execute more expensive operations that you want to perform only at the end of an 'epoch' (i.e. when all ready fibers have executed once). Adding a conditional test in yield opens up the possibility of a branch misprediction; at that point the cost of an additional fiber switch is minimal.

In fact, often there is an existing idle fiber anyway:

- if you spawn a dedicated thread to run a scheduler, the thread has an implicit fiber which would otherwise go unused;
- if you are running on top of another scheduler (for example boost::asio::io_service or one of the proposed executors), the idle fiber is simply the context of the underlying scheduler callback; in this case, once control returns to the idle fiber, it is appropriate to hand control back to the underlying io_service and reschedule another callback (with asio::post, for example).

Nested scheduler support would fall out naturally and almost transparently on top of this model, together with the ability to temporarily override the current thread-local scheduler. A nested scheduler's idle fiber would appear as just another fiber in the parent scheduler's loop.

To be clear, the major concerns I have with the current scheduler designs are:

- lack of proper cross-scheduler wakeup;
- unconditional sleep when the scheduler is empty;
- the handling of waiting tasks (including the clock wait queue).

All three issues are tightly interwoven. Idle tasks and nested schedulers would all be nice to have for me, but they are not deal breakers.

-- gpd