2015-09-04 20:10 GMT+02:00 Giovanni Piero Deretta
- Boost.Fiber is yet another library that comes with its own future type. For the sake of interoperability, the author should really contribute changes to boost.thread so that its futures can be re-used.
boost::fibers::future<> has to use boost::fibers::mutex internally instead of std::mutex/boost::mutex (which, as in boost.thread, may be implemented via pthread_mutex). boost::fibers::mutex is based on atomics - it does not block the thread; instead the running fiber is suspended and another fiber is resumed. A future implementation usable for both boost.thread and boost.fiber must therefore allow the mutex type to be customized. The futures of boost.thread as well as boost.fiber are allocating futures, i.e. the shared state is allocated on the free store. I had planned to provide a non-allocating future as suggested by Tony Van Eerd; fortunately Niall has already implemented one (boost.spinlock/boost.monad) - no mutex is required. If boost.monad is accepted into Boost, I'll integrate it into boost.fiber.
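A minimal sketch of the customization idea, with hypothetical names (shared_state as written here is not an actual Boost.Fiber type): the shared state is templated on the mutex and condition-variable types, so the same future code could serve threads and fibers alike.

#include <mutex>
#include <utility>

// hypothetical sketch: one shared state, two instantiations -
// < std::mutex, std::condition_variable > blocks the thread (boost.thread),
// < boost::fibers::mutex, boost::fibers::condition_variable > suspends
// only the fiber (boost.fiber)
template< typename T, typename Mutex, typename CondVar >
class shared_state {
    Mutex   mtx_;
    CondVar cond_;
    bool    ready_{ false };
    T       value_{};

public:
    void set_value( T v) {
        {
            std::unique_lock< Mutex > lk( mtx_);
            value_ = std::move( v);
            ready_ = true;
        } // release the lock before notifying waiters
        cond_.notify_all();
    }

    T get() {
        std::unique_lock< Mutex > lk( mtx_);
        cond_.wait( lk, [this](){ return ready_; });
        return value_;
    }
};

Both std::condition_variable and boost::fibers::condition_variable take a std::unique_lock of their own mutex type, so both instantiations should compile unchanged.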
On performance:
- The wait list for boost::fibers::mutex is a deque. Why not an intrusive linked list of stack-allocated nodes? This would remove one or two indirections and a memory allocation, and would make lock nothrow.
You are right, I'll take this into account.
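For illustration, the suggested technique might look like this (hypothetical names, not the current implementation): each waiter embeds a node that lives on its own stack, so lock() performs no allocation and can be nothrow.

// intrusive wait list: nodes are owned by the waiting fibers' stacks
struct wait_node {
    void      * fiber{ nullptr }; // context of the suspended fiber
    wait_node * next{ nullptr };
};

class wait_list {
    wait_node * head_{ nullptr };
    wait_node * tail_{ nullptr };

public:
    void push( wait_node & n) noexcept { // no free-store allocation
        n.next = nullptr;
        if ( tail_) tail_->next = & n; else head_ = & n;
        tail_ = & n;
    }

    wait_node * pop() noexcept {
        wait_node * n = head_;
        if ( n) {
            head_ = n->next;
            if ( ! head_) tail_ = nullptr;
        }
        return n;
    }
};

// conceptually, in mutex::lock():
//   wait_node node;          // on the blocked fiber's stack
//   node.fiber = active_fiber;
//   waiters.push( node);     // nothrow, no indirection through a deque
//   suspend_active_fiber();

The node safely outlives its list membership because the owning fiber stays suspended - its stack frame alive - until the node is removed and the fiber resumed.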
- The performance section lists a yield at about 4000 clock cycles. That seems excessive, considering that the context switch itself should be much less than 100 clock cycles. Where is the overhead coming from?
Yes, the context switch itself takes < 100 cycles. The selection of the next ready fiber (the look-up) probably accounts for some additional time. Note also that the performance tests measure the stack allocation too.
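For illustration, a rough micro-benchmark of the yield path could look like this (using the public API; it exercises the context switch plus the ready-queue look-up, but - unlike the shipped performance tests - excludes stack allocation, because both fibers are created before timing starts):

#include <chrono>
#include <cstdio>
#include <boost/fiber/all.hpp>

int main() {
    int const iterations = 100000;
    // second fiber in the same thread; with round-robin scheduling the
    // two fibers alternate, so each yield() is one full switch
    boost::fibers::fiber other( [=](){
        for ( int i = 0; i < iterations; ++i) {
            boost::this_fiber::yield();
        }
    });
    auto start = std::chrono::steady_clock::now();
    for ( int i = 0; i < iterations; ++i) {
        boost::this_fiber::yield();
    }
    auto ns = std::chrono::duration_cast< std::chrono::nanoseconds >(
        std::chrono::steady_clock::now() - start).count();
    other.join();
    // the timed region contains roughly two switches per iteration
    std::printf( "avg ns per yield: %lld\n",
                 static_cast< long long >( ns) / ( 2 * iterations) );
    return 0;
}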
What's the overhead for an OS thread yield?
32 µs
The last issue is particularly important because I can see a lot of spinlocks in the implementation.
The spinlocks are required because the library enables synchronization of fibers running in different threads.
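To make that concrete: what is meant is a small atomic test-and-set lock guarding internal structures such as the wait lists, which can be touched from fibers running on different OS threads; a fibers::mutex cannot be used there because it would itself need the scheduler it protects. A minimal sketch (not the library's actual code):

#include <atomic>

class spinlock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;

public:
    void lock() noexcept {
        // busy-waits instead of suspending, so it is safe to use inside
        // the scheduler itself; real code would add a back-off strategy
        while ( flag_.test_and_set( std::memory_order_acquire) ) {}
    }

    void unlock() noexcept {
        flag_.clear( std::memory_order_release);
    }
};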
With a very fast yield implementation, yielding to the next ready fiber could lead to a more efficient use of resources.
If a fiber A gets suspended (waiting/yielding), the fiber_manager, and thus the scheduling-algorithm, is executed in the context of fiber A. The fiber_manager picks the next fiber B to be resumed and initiates the context switch. Do you have specific suggestions?
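To make the control flow concrete, a conceptual sketch with illustrative names (not the real fiber_manager interface):

#include <deque>

struct fiber_context { /* registers, stack pointer, ... */ };

// placeholder for the low-level switch, e.g. Boost.Context's jump_fcontext
void jump_context( fiber_context * from, fiber_context * to);

struct fiber_manager {
    fiber_context *               active_fiber{ nullptr };
    std::deque< fiber_context * > ready_queue;

    // runs on the stack of the suspending fiber A
    void suspend_active() {
        fiber_context * a = active_fiber;
        // scheduling-algorithm look-up; assumes there is always a ready
        // fiber to pick (e.g. a dispatching fiber)
        fiber_context * b = ready_queue.front();
        ready_queue.pop_front();
        active_fiber = b;
        jump_context( a, b); // the actual context switch A -> B
    }
};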