Boost.Fiber mini-review September 4-13
Hi all,

The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form." Since then Oliver has substantially improved documentation, performance, library customization and the underlying implementation, and is bringing the library back for mini-review. The substance of the library API remains the same, which is why a mini-review is appropriate.

The Fiber library now requires a C++14-conforming compiler.

I will monitor reviews and discussion on both the boost-users@ list and the boost@ developers' list. Please include at least "fiber" and "review" in your mail subject, e.g. by replying to this message. (Please reply to only ONE list, however.) Thank you for your interest and your feedback!

-----------------------------------------------------
About the library:

Boost.Fiber provides a framework for micro-/userland-threads (fibers) scheduled cooperatively. The API contains classes and functions to manage and synchronize fibers similar to Boost.Thread. Each fiber has its own stack. A fiber can save the current execution state, including all registers and CPU flags, the instruction pointer, and the stack pointer, and later restore this state. The idea is to have multiple execution paths running on a single thread using a sort of cooperative scheduling (versus threads, which are preemptively scheduled). The running fiber decides explicitly when it should yield to allow another fiber to run (context switching). Boost.Fiber internally uses execution_context from Boost.Context; the classes in this library manage, schedule and, when needed, synchronize those contexts. A context switch between threads usually costs thousands of CPU cycles on x86, compared to a fiber switch with a few hundred cycles. A fiber can only run on a single thread at any point in time.
docs: http://olk.github.io/libs/fiber/doc/html/index.html
git: https://github.com/olk/boost-fiber
---------------------------------------------------
Please always state in your review whether you think the library should be accepted as a Boost library!

Additionally please consider giving feedback on the following general topics:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Nat Goodspeed
Boost.Fiber Review Manager
________________________________
On Fri, Sep 4, 2015 at 11:14 AM, Nat Goodspeed
I will monitor reviews and discussion on both the boost-users@ list and the boost@ developers' list.
I will also monitor reviews posted to the Boost Library Incubator: http://rrsd.com/blincubator.com/bi_library/fiber/?gform_post_id=859
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form." Since then Oliver has substantially improved documentation, performance, library customization and the underlying implementation, and is bringing the library back for mini-review.
The substance of the library API remains the same, which is why a mini-review is appropriate.
The Fiber library now requires a C++14-conforming compiler.
I find this decision to be quite unfortunate, given that the last time I checked (although it has been a while) supporting C++11 would be fairly easy. If I recall correctly, the requirement comes from the use of `index_sequence`, which can be easily replaced (https://github.com/boostorg/fusion/blob/master/include/boost/fusion/support/...), and the use of init-captures in lambdas, which can be emulated with a little code gymnastics. Oliver, could you confirm this is still the case? I am not objecting to this decision, which is for Oliver to make. On the contrary, I would like to volunteer to help make Boost.Fiber C++11 compatible if accepted. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 17:49 GMT+02:00 Agustín K-ballo Bergé
checked (although it has been a while) supporting C++11 would be fairly easy. If I recall correctly, the requirement comes from the use of `index_sequence` which can be easily replaced ( https://github.com/boostorg/fusion/blob/master/include/boost/fusion/support/...), and the use init-captures in lambdas which can be emulated with a little code gymnastics. Oliver, could you confirm this is still the case?
I am not objecting to this decision, which is for Oliver to make. On the contrary, I would like to volunteer to help make Boost.Fiber C++11 compatible if accepted.
boost.fiber itself requires C++14, as does boost.context, on which boost.fiber depends. C++11 is missing integer_sequence<> (used for deferred execution of a callable with arguments/parameter packs) and support for move capture (generalized lambda capture); both are used by the two libraries.
On 9/4/2015 1:02 PM, Oliver Kowalke wrote:
2015-09-04 17:49 GMT+02:00 Agustín K-ballo Bergé
: I find this decision to be quite unfortunate, given that the last time I
checked (although it has been a while) supporting C++11 would be fairly easy. If I recall correctly, the requirement comes from the use of `index_sequence` which can be easily replaced ( https://github.com/boostorg/fusion/blob/master/include/boost/fusion/support/...), and the use init-captures in lambdas which can be emulated with a little code gymnastics. Oliver, could you confirm this is still the case?
I am not objecting to this decision, which is for Oliver to make. On the contrary, I would like to volunteer to help make Boost.Fiber C++11 compatible if accepted.
boost.fiber itself requires C++14, as does boost.context, on which boost.fiber depends. C++11 is missing integer_sequence<> (used for deferred execution of a callable with arguments/parameter packs) and support for move capture (generalized lambda capture); both are used by the two libraries.
Nod, just like my analysis above says. So would you be willing to take patches for C++11 support? Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/4/2015 2:50 PM, Oliver Kowalke wrote:
2015-09-04 19:28 GMT+02:00 Agustín K-ballo Bergé
: Nod, just like my analysis above says. So would you be willing to take patches for C++11 support?
if the code remains readable, yes
Supporting `index_sequence` is easy, that's a library feature that can be implemented in C++11. The implementation I linked earlier is even better at compile times than some of those shipped by standard library implementations (but not as good as it could be).
I encountered that emulating move capture is not trivial, but maybe you have a nifty solution
Emulating move capture is not that simple. Consider this initial implementation:

    [=, fn = std::forward< Fn >( fn),
        tpl = std::make_tuple( std::forward< Args >( args) ...)]() mutable -> decltype( auto) {
        try {
            BOOST_ASSERT( is_running() );
            detail::invoke_helper( std::move( fn), std::move( tpl) );
            BOOST_ASSERT( is_running() );
        } catch ( fiber_interrupted const&) {
            except_ = std::current_exception();
        } catch ( ... ) {
            std::terminate();
        }
    });

A C++11-compatible one would instead do:

    std::bind([]( typename std::decay< Fn >::type& fn,
                  decltype( std::make_tuple( std::forward< Args >( args) ...))& tpl ) -> void {
        try {
            BOOST_ASSERT( is_running() );
            detail::invoke_helper( std::move( fn), std::move( tpl) );
            BOOST_ASSERT( is_running() );
        } catch ( fiber_interrupted const&) {
            except_ = std::current_exception();
        } catch ( ... ) {
            std::terminate();
        }
    },
    std::forward< Fn >( fn),
    std::make_tuple( std::forward< Args >( args) ...));
    // I assume there's a `this` involved here as well

Incidentally, try to avoid using `auto`/`decltype(auto)` on function templates (especially when the return type is as simple as `void`), as instantiation is required for them even for innocuous things like using `decltype` on them. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 19:28 GMT+02:00 Agustín K-ballo Bergé <[hidden email]>:
Nod, just like my analysis above says. So would you be willing to take patches for C++11 support?
if the code remains readable, yes I encountered that emulating move capture is not trivial, but maybe you have a nifty solution
Actually, `fit::capture` works nicely for emulating move capturing: http://fit.readthedocs.org/en/latest/capture/index.html It's not yet in Boost, but I am planning to submit it for formal review in the next couple of weeks. Paul
On Fri, Sep 4, 2015 at 4:14 PM, Nat Goodspeed
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form."
Hi Nat and Oliver, I'm interested in reviewing the library. What exactly is under review at this time? Should we treat this as a normal review, or are only some aspects under review? I assume that, this being a mini-review, the library was tentatively accepted the last time. -- gpd
Le 04/09/15 17:14, Nat Goodspeed a écrit :
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form." Since then Oliver has substantially improved documentation, performance, library customization and the underlying implementation, and is bringing the library back for mini-review.
Hi Nat, Oliver. Could you please remind us what "not in its present form" meant as an outcome of the review, and what has been done to address those issues? Best, Vicente
On Fri, Sep 4, 2015 at 4:14 PM, Nat Goodspeed
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today,
I did a quick skim of the docs and the implementation. I have to say
that both the docs and the code are quite readable and I can't see
anything controversial.
So, just to get the discussion started, here are a couple of comments:
On the library itself:
- Boost.Fiber is yet another library that comes with its own future
type. For the sake of interoperability, the author should really
contribute changes to boost.thread so that its futures can be re-used.
- In theory Boost.Thread's any_condition should be usable out of the box.
This probably should lead to a Boost-wide discussion. There are a few
Boost (or proposed) libraries that abstract hardware and OS capabilities,
for example boost.thread, boost.asio, boost.filesystem,
boost.iostream, boost.interprocess (which also comes with its own
mutexes and condition variables) and of course the proposed afio and
fiber. At the moment they mostly live in separate, isolated worlds.
It would be nice if the authors were to sit down and work out a shared
design, or, more practically, at least add some cross-library
interoperability facilities. This is C++; generalization should be
possible, easy and cheap.
On condition variables: should Boost.Fiber add the ability to wait on
any number of them? (You can use a future<> as an event with multi-wait
capability of course, but still...)
On performance:
- The wait list for boost::fiber::mutex is a deque
2015-09-04 20:10 GMT+02:00 Giovanni Piero Deretta
- Boost.Fiber is yet another library that comes with its own future type. For the sake of interoperability, the author should really contribute changes to boost.thread so that its futures can be re-used.
boost::fibers::future<> has to use boost::fibers::mutex internally instead of std::mutex/boost::mutex (which utilize, for instance, pthread_mutex) as boost.thread does. boost::fibers::mutex is based on atomics - it does not block the thread; instead the running fiber is suspended and another fiber is resumed. A possible future implementation usable for both boost.thread and boost.fiber must allow customizing the mutex type. The futures from boost.thread as well as boost.fiber are allocating futures, i.e. the shared state is allocated on the free store. I planned to provide a non-allocating future as suggested by Tony Van Eerd. Fortunately Niall has already implemented one (boost.spinlock/boost.monad) - no mutex is required. If boost.monad is accepted into Boost I'll try to integrate it in boost.fiber.
On performance:
- The wait list for boost::fiber::mutex is a deque. Why not an intrusive linked list of stack-allocated nodes? This would remove one or two indirections and a memory allocation, and make lock() nothrow.
you are right, I'll take this into account
- The performance session lists a yield at about 4000 clock cycles. That seem excessive, considering that the context switch itself should be much less than 100 clock cycles. Where is the overhead coming from?
Yes, the context switch itself takes < 100 cycles. Probably the selection of the next ready fiber (look-up) takes some additional time. Also, in the performance tests the stack allocation is measured too.
What's the overhead for an os thread yield?
32 µs
The last issue is particularly important because I can see a lot of spinlocks in the implementation.
the spinlocks are required because the library enables synchronization of fibers running in different threads
With a very fast yield implementation, yielding to the next ready fiber could lead to a more efficient use of resources.
If a fiber A gets suspended (waiting/yielding), the fiber_manager, and thus the scheduling algorithm, is executed in the context of fiber A. The fiber manager picks the next fiber B to be resumed and initiates the context switch. Do you have specific suggestions?
On 09/04/2015 08:54 PM, Oliver Kowalke wrote:
2015-09-04 20:10 GMT+02:00 Giovanni Piero Deretta
: - Boost.Fiber is yet another library that comes with its own future type. For the sake of interoperability, the author should really contribute changes to boost.thread so that its futures can be re-used.
boost::fibers::future<> has to use boost::fibers::mutex internally instead of std::mutex/boost::mutex (which utilize, for instance, pthread_mutex) as boost.thread does. boost::fibers::mutex is based on atomics - it does not block the thread; instead the running fiber is suspended and another fiber is resumed. A possible future implementation usable for both boost.thread and boost.fiber must allow customizing the mutex type. The futures from boost.thread as well as boost.fiber are allocating futures, i.e. the shared state is allocated on the free store. I planned to provide a non-allocating future as suggested by Tony Van Eerd. Fortunately Niall has already implemented one (boost.spinlock/boost.monad) - no mutex is required. If boost.monad is accepted into Boost I'll try to integrate it in boost.fiber.
Could you please elaborate a bit why non-allocating futures wouldn't require a mutex? Or more generally, a execution context suspension/yield mechanism?
2015-09-04 21:14 GMT+02:00 Thomas Heller
require a mutex? Or more generally, a execution context suspension/yield mechanism?
I mean no std::mutex (pthread_mutex) is required. A protocol, utilizing atomics, between future and promise does the job - http://blog.forecode.com/2012/05/23/a-non-allocating-stdfuturepromise/ In 'void promise::set_value(R && r)', boost::this_fiber::yield() could be used if the future is not ready yet (e.g. AS(future->state, 0, 'R') fails).
On 09/04/2015 09:08 PM, Oliver Kowalke wrote:
2015-09-04 21:14 GMT+02:00 Thomas Heller
: Could you please elaborate a bit why non-allocating futures wouldn't
require a mutex? Or more generally, a execution context suspension/yield mechanism?
I mean no std::mutex (pthread_mutex) is required. a protocol, utilizing atomics, between future and promise does the job - http://blog.forecode.com/2012/05/23/a-non-allocating-stdfuturepromise/ in 'void promise::set_value(R && r)' boost::this_fiber::yield() could be used, if the future is not ready yet (e.g. AS(future->state, 0, 'R') fails).
The problematic portion in std::mutex/boost::mutex is exactly the missing mechanism for yielding in the correct way. Even if you implement a mechanism like the one in the blog post you mentioned, the problem hasn't been solved.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
2015-09-04 21:39 GMT+02:00 Thomas Heller
The problematic portion in std::mutex/boost::mutex is exactly the missing mechanism for yielding in the correct way. Even if you implement a mechanism like the one in the blog post you mentioned, the problem hasn't been solved.
why would boost::this_fiber::yield() be inappropriate?
On 09/04/2015 09:51 PM, Oliver Kowalke wrote:
2015-09-04 21:39 GMT+02:00 Thomas Heller
: The problematic portion in std::mutex/boost::mutex is exactly the missing mechanism for yielding in the correct way. Even if you implement a mechanism like the one in the blog post you mentioned, the problem hasn't been solved.
why would boost::this_fiber::yield() be inappropriate?
It's inappropriate for anything not running in a boost fiber execution context.
2015-09-04 22:13 GMT+02:00 Thomas Heller
On 09/04/2015 09:51 PM, Oliver Kowalke wrote:
2015-09-04 21:39 GMT+02:00 Thomas Heller
: The problematic portion in std::mutex/boost::mutex is exactly the missing mechanism for yielding in the correct way. Even if you implement a mechanism like the one in the blog post you mentioned, the problem hasn't been solved.
why would boost::this_fiber::yield() be inappropriate?
It's inappropriate for anything not running in a boost fiber execution context.
it is safe to call boost::this_fiber::yield() from main(); each thread has a main context/main fiber
On 09/04/2015 10:03 PM, Oliver Kowalke wrote:
2015-09-04 22:13 GMT+02:00 Thomas Heller
: On 09/04/2015 09:51 PM, Oliver Kowalke wrote:
2015-09-04 21:39 GMT+02:00 Thomas Heller
: The problematic portion in std::mutex/boost::mutex is exactly the missing mechanism for yielding in the correct way. Even if you implement a mechanism like the one in the blog post you mentioned, the problem hasn't been solved.
why would boost::this_fiber::yield() be inappropriate?
It's inappropriate for anything not running in a boost fiber execution context.
it is safe to call boost::this_fiber::yield() from main(); each thread has a main context/main fiber
Ok, makes sense. So on a non-fiber context this is essentially a no-op? Might be a good idea to add this to the documentation. While this helps applications work transparently with those non-allocating futures no matter what (deteriorating to a busy-wait loop, which might not be what everyone expects), what I wanted to get at is that in order for me to consider it a real solution, it has to be adaptable to different execution contexts: boost::this_thread::yield, fiber::this_thread::yield etc... Please note that, while I'd prefer a generic solution to that problem (which currently doesn't exist), I don't think what's described is a show stopper, it just has to be documented appropriately.
On Fri, Sep 4, 2015 at 4:33 PM, Thomas Heller
On 09/04/2015 10:03 PM, Oliver Kowalke wrote:
it is safe to call boost::this_fiber::yield() from main(), e.g. each thread has a main-context/main-fiber
Ok makes sense. So on a non-fiber context this is essentially a noop? Might be a good idea to add this to the documentation.
If I may attempt to clarify your feedback: I think you're suggesting that the documentation doesn't yet make clear that this_fiber operations, such as yield(), are valid from main() and from each new thread function, as well as from explicitly-launched fibers. Good point. I'm not sure I understand what you mean by a non-fiber context. In effect, main() is running on the default fiber for the application's main thread. Similarly, at entry to a thread function, code is running on the default fiber for that thread. If you call this_fiber::yield() without yet having launched any other fibers, it's effectively a no-op: control detours through the fiber manager and the scheduler, which concludes that there's only one ready fiber, so the yield() call returns at once. But such a call is indeed meaningful once you launch other fibers. If the documentation were to clarify that yield (et al.) are valid from main() (et al.), would that address your concern, or would you also want to see something like the more verbose paragraph above?
On 09/04/2015 10:52 PM, Nat Goodspeed wrote:
On Fri, Sep 4, 2015 at 4:33 PM, Thomas Heller
wrote: On 09/04/2015 10:03 PM, Oliver Kowalke wrote:
it is safe to call boost::this_fiber::yield() from main(), e.g. each thread has a main-context/main-fiber
Ok makes sense. So on a non-fiber context this is essentially a noop? Might be a good idea to add this to the documentation.
If I may attempt to clarify your feedback:
I think you're suggesting that the documentation doesn't yet make clear that this_fiber operations, such as yield(), are valid from main() and from each new thread function, as well as from explicitly-launched fibers. Good point.
I'm not sure I understand what you mean by a non-fiber context. In effect, main() is running on the default fiber for the application's main thread. Similarly, at entry to a thread function, code is running on the default fiber for that thread. If you call this_fiber::yield() without yet having launched any other fibers, it's effectively a no-op: control detours through the fiber manager and the scheduler, which concludes that there's only one ready fiber, so the yield() call returns at once. But such a call is indeed meaningful once you launch other fibers.
Ahh, that makes sense and is a nice behavior. I wasn't aware of that. What I meant with "fiber execution context" is indeed something that is implemented in boost::fiber::fiber_manager::active_fiber_. However, my assumption was that this tss variable is only valid in tasks created with boost.fiber (that's what I'm used to). I think I read that at one place or another.
If the documentation were to clarify that yield (et al.) are valid from main() (et al.), would that address your concern, or would you also want to see something like the more verbose paragraph above?
I think the reference documentation would be fine with just a small sentence about where they are valid from. That verbose paragraph is probably useful in some form of example/tutorial. This information is extremely valuable if you want to write busy-wait loops, for example to implement an exponential backoff scheme like the one necessary in the non-allocating futures.
On Fri, Sep 4, 2015 at 5:32 PM, Thomas Heller
On 09/04/2015 10:52 PM, Nat Goodspeed wrote:
If the documentation were to clarify that yield (et al.) are valid from main() (et al.), would that address your concern, or would you also want to see something like the more verbose paragraph above?
I think the reference documentation would be fine with just a small sentence about where they are valid from.
Okay, so noted.
That verbose paragraph is probably useful in some form of example/tutorial.
Does this section address that point? http://olk.github.io/libs/fiber/doc/html/fiber/integration.html
2015-09-04 22:33 GMT+02:00 Thomas Heller
Ok makes sense. So on a non-fiber context this is essentially a noop?
yes, I'm referring to a fiber-context
Might be a good idea to add this to the documentation.
OK
While this helps applications work transparently with those non-allocating futures no matter what (deteriorating to a busy-wait loop, which might not be what everyone expects), what I wanted to get at is that in order for me to consider it a real solution, it has to be adaptable to different execution contexts: boost::this_thread::yield, fiber::this_thread::yield etc...
adaptable at compile-time or runtime
Please note that, while I'd prefer a generic solution to that problem (which currently doesn't exist), I don't think what's described is a show stopper, it just has to be documented appropriately.
using a non-allocating future seems a good solution to me because it could yield better performance - Niall has posted some measurements of his implementation on the list
On 4 Sep 2015 8:09 pm, "Oliver Kowalke"
2015-09-04 21:14 GMT+02:00 Thomas Heller
: Could you please elaborate a bit why non-allocating futures wouldn't
require a mutex? Or more generally, a execution context suspension/yield mechanism?
I mean no std::mutex (pthread_mutex) is required. A protocol, utilizing atomics, between future and promise does the job - http://blog.forecode.com/2012/05/23/a-non-allocating-stdfuturepromise/ In 'void promise::set_value(R && r)', boost::this_fiber::yield() could be used if the future is not ready yet (e.g. AS(future->state, 0, 'R') fails).
That's not really what I meant though. I believe that a future can be implemented efficiently without any internal mutex or condition variable (which would be relegated only to the wait side). A spin lock might be needed to implement a tiny critical section in 'then' or 'wait' for shared_futures, but that's it. Such a future would work well as the underlying implementation for boost::thread::future or boost::fiber::future, which would only need to override the wait implementation and would still allow both futures to interoperate. My suggestion is to convince Vicente to implement such functionality in Boost.Thread. -- gpd
Le 05/09/15 00:32, Giovanni Piero Deretta a écrit :
On 4 Sep 2015 8:09 pm, "Oliver Kowalke"
wrote: 2015-09-04 21:14 GMT+02:00 Thomas Heller
: Could you please elaborate a bit why non-allocating futures wouldn't
require a mutex? Or more generally, a execution context suspension/yield mechanism?
I mean no std::mutex (pthread_mutex) is required. a protocol, utilizing atomics, between future and promise does the job - http://blog.forecode.com/2012/05/23/a-non-allocating-stdfuturepromise/ in 'void promise::set_value(R && r)' boost::this_fiber::yield() could be used, if the future is not ready yet (e.g. AS(future->state, 0, 'R') fails). That's not really what I meant though.
I believe that a future can be implemented efficiently without any internal mutex or condition variable (which would be relegated only to the wait side). A spin lock might be needed to implement a tiny critical section in 'then' or 'wait' for shared_futures, but that's it. Such a future would work well as the underlying implementation for boost::thread::future or boost::fiber::future, which would only need to override the wait implementation and would still allow both futures to interoperate.
My suggestion is to convince Vicente to implement such functionality in Boost.Thread.
Hi Giovanni,

boost::future has a lot of design issues. I would welcome a base class from which boost::future can inherit, as it is quite difficult to maintain it by myself :(. I'm a little bit skeptical about the approach, but who knows.

Note that boost::future and boost::thread have some interactions (the at-thread-exit family of functions) that need to be taken into account.

I believe that Andrey Semashev's intent with Boost.Sync was more or less that, but I may be wrong. I don't know where he is with his library.

Best,
Vicente

P.S. to the HPC people on this list: when could we expect to see the work on HPX moved to Boost?
On Fri, Sep 4, 2015 at 11:56 PM, Vicente J. Botet Escriba
Hi Giovanni,
boost::future has a lot of design issues. I would welcome a base class from which boost::future can inherit, as it is quite difficult to maintain it by myself :(. I'm a little bit skeptical about the approach, but who knows.
what are your concerns in particular?
Basically what I would like is to decouple the signalling of a future
shared_state, as done by e.g. the promise, from the action to be
taken. The intuition is that most efficient implementations of
synchronization primitives are based on a fast lock free user space
signalling path plus a slow kernel path in case there are waiters. My
idea is to decouple the slow path from the actual signalling
implementation. The decoupling is done via an interface like this:
struct signalable {
    virtual void signal() = 0;
    atomic<...> next = 0;
};
Note that boost::future and boost::thread have some interactions (the at-thread-exit family of functions) that need to be taken into account
I imagine that can be tricky. Is it much more complex than the thread holding a pointer to the shared state to be made ready on exit? -- gpd
On Fri, Sep 4, 2015 at 11:56 PM, Vicente J. Botet Escriba
wrote: Hi Giovanni,
boost::future has a lot of design issues. I would welcome a base class from which boost::future can inherit, as it is quite difficult to maintain it by myself :(. I'm a little bit skeptical about the approach, but who knows.
what are your concerns in particular?
Basically what I would like is to decouple the signalling of a future shared_state, as done by e.g. the promise, from the action to be taken. The intuition is that most efficient implementations of synchronization primitives are based on a fast lock free user space signalling path plus a slow kernel path in case there are waiters. My idea is to decouple the slow path from the actual signalling implementation. The decoupling is done via an interface like this:
struct signalable {
    virtual void signal() = 0;
    atomic<...> next = 0;
};

An actual signalable implementation could invoke a continuation (unifying then and wait), signal a condition variable, an event, or a file descriptor, switch to another fiber, etc.
The signaler side has something like this:
struct signaler {
    void signal();
    bool try_wait(signalable*);
    bool try_unwait(signalable*);
};
Note this is not virtual. In fact it could simply be a concept. Try_wait attempts to add the signalable to the wait list for the signaler. Failure means that the signaler was signaled in the meantime. try_unwait attempts to remove the signalable from the wait list. Failure also means that the signaler was signaled in the meantime.
The signaler can be implemented lock_free except in the case in which try_unwait is called *and* there are multiple waiters. I believe this case can be handled with a spinlock just for mutual exclusion between unsignalers (try_waiters and signal can still be lock free), but I have yet to implement it
With this interface a future can be implemented without any additional synchronization.
For example the default implementation of 'then' would simply allocate a new shared state for the returned future which inherit from signalable and register it to the shared_state of the previous continuation. The callback is allocated inline in the shared_state. When signal is called, the callback is invoked.
The default wait of course would create a wait object on the stack (or have a thread-local one) which also implements the signalable interface (a portable implementation would be based on a condition variable + boolean flag + mutex). A timed wait needs to use the try_unwait interface if the clock times out.
Sorry for the rambling, hopefully the idea is a bit clearer.
I have shared my implementation previously. This is the link again:
https://github.com/gpderetta/libtask/blob/master/future.hpp
Note: incomplete, buggy and definitely not production ready.

Le 05/09/15 02:04, Giovanni Piero Deretta a écrit :

Giovanni, I'm all for adapting Boost.Thread to any global solution to this problem. What we need first is someone working on it, and showing that we are able to efficiently implement when_all/when_any for some kind of future concept. Then we can try to see how to make boost::future/std::future models of this concept. If you have ideas on how this can work, a POC and time, I propose you work on it and propose such a library to Boost. I don't believe it is fair to add a link to an ongoing work implementation during the review of a library having a different scope.
Note that boost::future and boost::thread have some interactions (the at-thread-exit family of functions) that need to be taken into account.
I imagine that can be tricky. Is it much more complex than the thread holding a pointer to the shared state to be made ready on exit?
It is not a question of how complex it can be; it is a matter of public interfaces. We don't have such a public interface in the standard, and I believe that we need something like that. It is not a good thing that futures and threads are so dependent. The difficulty is defining the interface that makes futures/threads independent without falling into a specific implementation. Thanks for re-raising these interaction problems, but I believe that they should be handled outside the scope of this review. I suggest you start a new thread. Vicente
On Sun, Sep 6, 2015 at 1:34 AM, Vicente J. Botet Escriba wrote:
hum, futures are in scope of a library that reimplements the c++11 thread facilities on top of user space threads. My primary concern is that I wouldn't want to see in boost a proliferation of futures all slightly different and incompatible with each other. This being the second review in a row for a library that comes with its own futures, I would say that my concerns are well founded. To be clear, I'm not trying to sneak in a proposal for a library of my own: I have neither enough free time nor the permission of my employer to contribute a library to boost. I was merely suggesting a *possible* implementation of the interoperation protocol. My acceptance would be conditional on the library being interoperable with boost.thread, but not on the implementation. I would prefer that the two futures types were physically the same or at least share part of the implementation though. Also, I'm not trying to get you to do the work of adding the missing functionality to boost::thread::future. That would probably be the responsibility of whoever is proposing a new library, but they would need your approval, as you are the current maintainer of boost.thread. -- gpd
Le 06/09/15 14:09, Giovanni Piero Deretta a écrit :
> hum, futures are in scope of a library that reimplements the c++11 thread facilities on top of user space threads. My primary concern is that I wouldn't want to see in boost a proliferation of futures all slightly different and incompatible with each other. This being the second review in a row for a library that comes with its own futures, I would say that my concerns are well founded.

The problem exists already for any library proposing a standard future implementation: how can this new implementation work with a standard implementation? This is the case for Boost.Thread, HPX, Boost.AFIO, Boost.Fiber, or your own implementation. What I meant is that we need to change the standard to be able to solve this problem, as the standard doesn't take it into account.

> To be clear, I'm not trying to sneak in a proposal for a library of my own: I have neither enough free time nor the permission of my employer to contribute a library to boost.

I suspected that this would be the case. We are all looking for free time.

> I was merely suggesting a *possible* implementation of the interoperation protocol.

I'm very interested in finding a solution to this interaction problem, and I'll analyze any hint (but I will prefer non-intrusive solutions that avoid forcing a specific implementation).

> My acceptance would be conditional on the library being interoperable with boost.thread, but not on the implementation.

This is up to you. I don't know what you want as the result of when_all/when_any with different implementations, even more when we have fiber futures and thread futures. What would be the semantics of this operation?

> I would prefer that the two futures types were physically the same or at least share part of the implementation though.

This could be done, if a solution to the interaction problem is satisfactory. We could also share a common implementation between Fiber and Thread even if we don't have a solution to this problem. However, we need a volunteer to do it. Some issues to consider: * Boost.Thread supports C++98 compilers, Boost.Fiber doesn't; the common implementation would be as complex as the current Boost.Thread implementation. * There would be a major temptation to redesign it, with a high risk of regressions in Boost.Thread. I would be OK with a C++14 implementation that a new Boost.Thread version can use, however.

> Also, I'm not trying to get you to do the work of adding the missing functionality to boost::thread::future. That would probably be the responsibility of whoever is proposing a new library, but they would need your approval, as you are the current maintainer of boost.thread.

Glad to hear that ;-) Best, Vicente
Vicente J. Botet Escriba wrote:
> The problem exists already for any library proposing a standard future implementation. How can this new implementation work with a standard implementation? This is the case for Boost.Thread, HPX, Boost.AFIO, Boost.Fiber, or your own implementation. What I meant is that we need to change the standard to be able to solve this problem, as the standard doesn't take it into account.
I believe the solution is not so much to change (std::)future, but to push forward 'await' instead. It already has (most of) a future-type-agnostic synchronization mechanism specified. Granted, the proposal will change, but I'm certain it will get into C++ in the C++17 timeframe. 'await' allows rewriting when_all, etc. such that it works for any combination of input futures. Here's a sketch (sans decay, etc.):
namespace foobar
{
    template <typename Future...>
    requires(is_future<Future>)...
    future<tuple<Future...>>
    when_all(Future&&... f)
    {
        (await f)...;
        return make_tuple(std::forward<Future>(f)...);
    }
}
Le 06/09/15 15:14, Hartmut Kaiser a écrit :
> I believe the solution is not so much to change (std::)future, but to push forward 'await' instead. It already has (most of) a future-type-agnostic synchronization mechanism specified.

You said "most of". What is missing yet?

> Granted, the proposal will change, but I'm certain it will get into C++ in the C++17 timeframe. 'await' allows rewriting when_all, etc. such that it works for any combination of input futures. Here's a sketch (sans decay, etc.): [...]

Clever and simple: await and variadic expansion allow this simple implementation of boost::when_all. Does this work already? It already implies a change to the standard, as when_all must work for any future-like type, and of course it needs await to make it simple. Do you also have a clever and simple implementation for when_any? Wouldn't it need more changes to the standard? This is what I had in mind when I said "the standard doesn't take it into account", at least not now.

> i.e. every library which exposes its future type provides this trivial implementation of when_all(), etc. It can consume any future from any other place, however.

Ok, so the user chooses the future result by using a specific qualified overload, isn't it?

auto r = foobar::when_all(....)

Even if the implementation seems simple, it is weird to have to do it, but I could live with that. Vicente
> You said "most of". What is missing yet?

Await is assumed to return the value represented by the awaitable expression (see below). This would not work for unique futures.

> Clever and simple: await and variadic expansion allow this simple implementation of boost::when_all. Does this work already? It already implies a change to the standard, as when_all must work for any future-like type, and of course it needs await to make it simple. Do you also have a clever and simple implementation for when_any? Wouldn't it need more changes to the standard?

I understand and I agree, this needs to be solved. Well, there is more; I actually cheated a bit. Await currently invalidates unique futures, so the above would work for shared_future only. But you get the idea; I'm sure it can be done. Wrt when_any - I don't have an idea how to properly implement that using await at this point. The current proposal has no way of 'cancelling' await, iirc - at least not without setting an exception.

> Ok, so the user chooses the future result by using a specific qualified overload, isn't it?
>
> auto r = foobar::when_all(....)
>
> Even if the implementation seems simple, it is weird to have to do it, but I could live with that.

I don't see any other way. Even if we find a way to customize std::future (or its shared state) it will still have to represent (probably type-erased) a specific implementation type (synchronization-wise) in each specific context, which has to be specified in some way.

Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu
Le 06/09/15 17:20, Hartmut Kaiser a écrit :
> Await is assumed to return the value represented by the awaitable expression (see below). This would not work for unique futures.

This was exactly what I understood.

> Well, there is more; I actually cheated a bit. Await currently invalidates unique futures, so the above would work for shared_future only. But you get the idea; I'm sure it can be done.

I'm sure we will get there. My concern was that we cannot reject a library because it doesn't solve this still-unresolved issue.

> Wrt when_any - I don't have an idea how to properly implement that using await at this point. The current proposal has no way of 'cancelling' await, iirc - at least not without setting an exception.
>
> I don't see any other way. Even if we find a way to customize std::future (or its shared state) it will still have to represent (probably type-erased) a specific implementation type (synchronization-wise) in each specific context, which has to be specified in some way.
Other alternatives could be:
Having an overload that takes the FUTURE type constructor:
namespace std // *1
{
template
Hartmut Kaiser wrote:
> template <typename Future...> requires(is_future<Future>)... future<tuple<Future...>>
> when_all(Future &&... f) { (await f)...; return make_tuple(std::forward<Future>(f)...); }
I've always found when_any much more interesting than when_all. Is it as trivial to implement with await as when_all?
> I've always found when_any much more interesting than when_all. Is it as trivial to implement with await as when_all?
Nod, I agree. However, I have not found a clever way to do that yet. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu
On 9/6/2015 1:35 PM, Peter Dimov wrote:
Peter Dimov wrote:
> I've always found when_any much more interesting than when_all. Is it as trivial to implement with await as when_all?
The await proposal already deals with heterogeneous futures (awaitable types). It comes with traits and customization points on top of which it is built. Using `await_suspend` one can attach a callback to an awaitable object that fires when the future becomes ready (without consuming it). A heterogeneous `when_all` would be built on top of `await_suspend`; plain `await` is optimal when one just wants the semantics of `when_all`. Similarly, a heterogeneous `when_any` would use `await_suspend` to attach a callback to all awaitable objects, and the first callback to run would make the resulting future ready (potentially, but not necessarily, canceling all other callbacks). Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
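The callback-per-awaitable scheme described above can be emulated (clumsily) with standard futures, using one waiter thread per future standing in for the attached `await_suspend` callback — `std::future` has no such hook. `when_any_value` is a made-up name; a real implementation would attach callbacks, not spawn threads:

```c++
#include <chrono>
#include <future>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

// Emulated when_any: the first future to become ready fulfils the result
// promise; call_once plays the role of "the first callback to run wins".
template <class T>
std::future<T> when_any_value(std::vector<std::shared_future<T>> fs) {
    auto state =
        std::make_shared<std::pair<std::promise<T>, std::once_flag>>();
    std::future<T> result = state->first.get_future();
    for (auto f : fs) {
        std::thread([f, state]() mutable {
            T v = f.get();                 // the "callback" fires here
            std::call_once(state->second,  // only the first one wins
                           [&] { state->first.set_value(v); });
        }).detach();
    }
    return result;
}
```

The interesting part is exactly what Agustín points out: the callbacks observe the inputs without consuming them, and the losers simply never get to set the result.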
On Sun, Sep 6, 2015 at 7:22 PM, Agustín K-ballo Bergé
> Similarly, a heterogeneous `when_any` would use `await_suspend` to attach a callback to all awaitable objects, and the first callback to run would make the resulting future ready (potentially, but not necessarily, canceling all other callbacks).
In the last coroutine proposal (is it N4499?) it seems that await_suspend(h) is guaranteed to work only if 'h' is of type coroutine_handle<P>. Also, at least from the example implementation in N4286, await_suspend is not really more powerful than 'then'. In fact, the previous when_all implementation and the when_any you've described can be implemented on top of 'then'. The problematic scenario is performing wait_any multiple times on the same future set, as the attached continuations will keep growing every time. BTW, it is sort of possible (but quite inefficient) to attach a continuation to a unique future without consuming it:

template<class T>
future<void> when_ready(std::future<T>& x) {
    auto shared_x = x.share();
    x = shared_x.then([](auto&& x) { return x.get(); });
    return x.then([](auto&&) {});
}

future<T> x = ...;
await when_ready(x); // or when_ready(x).then([] { /* do something */ });

-- gpd
On 9/6/2015 6:26 PM, Giovanni Piero Deretta wrote:
> In the last coroutine proposal (is it N4499?) it seems that await_suspend(h) is guaranteed to work only if 'h' is of type coroutine_handle<P>.
I don't quite see this, nor the opposite. Luckily the proposal is still in flux (and I seriously hope it goes into a TS first).
> Also, at least from the example implementation in N4286, await_suspend is not really more powerful than 'then'. In fact, the previous when_all implementation and the when_any you've described can be implemented on top of 'then'.
I thought the concern was that `then` does a bit too much: it consumes the future, which would be problematic for `when_any`, whose input futures could not be re-obtained until all of them are ready.
> The problematic scenario is performing wait_any multiple times on the same future set, as the attached continuations will keep growing every time.
I had not considered this problematic: each call to `wait_any` will cause one memory allocation, and it will only release it once all input futures are done. I have not done any specific experimentation on this topic; I might reach the same conclusions as you once I do. My prior experience with `wait_any` includes dealing with hundreds or thousands of futures; going back and "unwaiting" each of them has never been a viable approach for me.
> BTW, it is sort of possible (but quite inefficient) to attach a continuation to a unique future without consuming it:
>
> template<class T> future<void> when_ready(std::future<T>& x) { auto shared_x = x.share(); x = shared_x.then([](auto&& x) { return x.get(); });
No, this ^ introduces a copyable requirement.
> return x.then([](auto&&) {}); }
>
> future<T> x = ...;
> await when_ready(x); // or when_ready(x).then([] { /* do something */ });
Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-07 9:30 GMT+08:00 Agustín K-ballo Bergé :
> On 9/6/2015 6:26 PM, Giovanni Piero Deretta wrote:
>> BTW, it is sort of possible (but quite inefficient) to attach a continuation to a unique future without consuming it: [...]
> No, this ^ introduces a copyable requirement.
Assuming the standard provides await_ready/await_suspend/await_resume as customization points, as suggested by Eric, there's a simple and universal solution:

```c++
template<class Task>
struct ready_awaiter {
    Task task;

    bool await_ready() {
        return std::await_ready(task);
    }

    template<class F>
    auto await_suspend(F&& f) {
        return std::await_suspend(task, std::forward<F>(f));
    }

    void await_resume() const noexcept {}
};

template<class Task>
inline ready_awaiter<Task> ready(Task&& task) {
    return {std::forward<Task>(task)};
}
```

By ignoring `await_resume`, `await ready(x)` won't consume `x`. Unfortunately, these customization points aren't available to the user in the current proposal, but I did implement this - it's called `awaken` in my emulation library: https://github.com/jamboree/co2/blob/master/include/co2/coroutine.hpp#L584
2015-09-05 0:32 GMT+02:00 Giovanni Piero Deretta
I believe that a future can be implemented efficiently without any internal mutex or condition variable (which would be relegated to the wait side only). A spin lock might be needed to implement a tiny critical section in 'then' or 'wait' for shared_futures, but that's it.
that's what the non-allocating future does
My suggestion is to convince Vicente to implement such functionality in Boost.Thread.
but Niall has already done the job in boost.monad
On Fri, Sep 4, 2015 at 7:54 PM, Oliver Kowalke
2015-09-04 20:10 GMT+02:00 Giovanni Piero Deretta
- Boost.Fiber is yet another library that comes with its own future type. For the sake of interoperability, the author should really contribute changes to boost.thread so that its futures can be re-used.
boost::fibers::future<> has to use internally boost::fibers::mutex instead of std::mutex/boost::mutex (utilizing for instance pthread_mutex) as boost.thread does. boost::fibers::mutex is based on atomics - it does not block the thread; instead the running fiber is suspended and another fiber is resumed. a possible future implementation - usable for boost.thread + boost.fiber - must offer to customize the mutex type. futures from boost.thread as well as boost.fiber are allocating futures, i.e. the shared state is allocated on the free store. I had planned to provide a non-allocating future as suggested by Tony Van Eerd. Fortunately Niall has already implemented it (boost.spinlock/boost.monad) - no mutex is required. If boost.monad is accepted into boost I'll try to integrate it in boost.fiber.
So, I do not want a non-allocating future, as I think it is actually counter-productive. I only want a way to combine boost::thread::future and boost::fiber::future (i.e. in a call to wait_all/wait_any). There are two ways to do that: 1) either a simple protocol that allows efficient future interoperation (note that efficient is key here, otherwise 'then' could also work) between distinct futures. or 2) boost::fiber::future is simply a tiny wrapper over boost::thread::future that overrides the wait policy. In the second case of course boost.thread must allow specifying a wait policy and must not use mutexes internally (it should either have a lock free implementation or use spin locks). [...]
- The performance section lists a yield at about 4000 clock cycles. That seems excessive, considering that the context switch itself should be much less than 100 clock cycles. Where is the overhead coming from?
yes, the context switch itself takes < 100 cycles. probably the selection of the next ready fiber (look-up) takes some additional time - and in the performance tests the stack allocation is measured too
Hum, right, the test is not just measuring the performance of yield. Do you have/can you write a benchmark that simply measures the yield between N fibers, over a few thousand iterations? Anyway, if we subtract the create + join cost from the benchmark, the cost is still in the 2 µs range. Shouldn't the next-fiber selection be just a simple list pop-front when there are runnable fibers (i.e. no work stealing is required)?
What's the overhead for an os thread yield?
32 µs
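For reference, a figure like that can be obtained by hand with an OS-thread ping-pong (a sketch — the exact methodology behind the quoted 32 µs is not stated; two threads hand control back and forth through a condition variable, so each round trip costs roughly two context switches):

```c++
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Measure the average cost of one thread-to-thread handoff, in nanoseconds.
inline double measure_handoff_ns(int rounds) {
    std::mutex m;
    std::condition_variable cv;
    bool worker_turn = false;

    std::thread worker([&] {
        for (int i = 0; i < rounds; ++i) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return worker_turn; });
            worker_turn = false;   // hand control back to the main thread
            cv.notify_one();
        }
    });

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < rounds; ++i) {
        std::unique_lock<std::mutex> lk(m);
        worker_turn = true;        // hand control to the worker
        cv.notify_one();
        cv.wait(lk, [&] { return !worker_turn; });
    }
    auto t1 = std::chrono::steady_clock::now();
    worker.join();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    return static_cast<double>(ns.count()) / (2.0 * rounds);  // per handoff
}
```

Numbers vary wildly with the OS scheduler and whether the two threads land on the same core, which is one reason such figures deserve a stated methodology.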
So boost.fiber is about an order of magnitude faster. That is good, but I was hoping for more.
The last issue is particularly important because I can see a lot of spinlocks in the implementation.
the spinlocks are required because the library enables synchronization of fibers running in different threads
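For context, a spinlock of the kind being discussed can be as small as this (a generic illustration, not Boost.Fiber's actual detail::spinlock; in a fiber library the back-off would call this_fiber::yield() so another fiber can run while the lock is held by a fiber on a different thread — here std::this_thread::yield() stands in):

```c++
#include <atomic>
#include <thread>

// Minimal test-and-set spinlock. Satisfies BasicLockable, so it works
// with std::lock_guard / std::unique_lock.
class spinlock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire))
            std::this_thread::yield();  // back off instead of burning CPU
    }
    void unlock() {
        flag_.clear(std::memory_order_release);
    }
};
```

The acquire/release pairing is what makes writes made under the lock visible to the next owner, which is all a cross-thread fiber library needs from it.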
With a very fast yield implementation, yielding to the next ready fiber could lead to a more efficient use of resources.
if a fiber A gets suspended (waiting/yielding), the fiber_manager - and thus the scheduling algorithm - is executed in the context of fiber A. the fiber manager picks the next fiber B to be resumed and initiates the context switch. do you have specific suggestions?
Please ignore my last two comments. I only meant to say that spinning was wasteful and you should yield to the next fiber - but that's actually the case in the current spinlock implementation; I should have looked more carefully. Btw, why is spinlock in detail? It could be useful to expose it. -- gpd
2015-09-05 0:53 GMT+02:00 Giovanni Piero Deretta
So, I do not want a non-allocating future, as I think it is actually counter-productive. I only want a way to combine boost::thread::future and boost::fiber::future (i.e. in a call to wait_all/wait_any). There are two ways to do that:
1) either a simple protocol that allows efficient future interoperation (note that efficient is key here, otherwise 'then' could also work) between distinct futures.
hmmm ...
or
2) boost::fiber::future is simply a tiny wrapper over boost::thread::future that overrides the wait policy.
the futures of boost.thread and boost.fiber differ effectively only in the type of mutex and condition_variable (implementing the suspend/resume mechanism for threads/fibers). a base implementation of future has to take the types of mutex and condition_variable as template args:

template< typename T, typename Mutex, typename Condition >
class base_future { };

// boost.thread
template< typename T >
using future = base_future< T, boost::mutex, boost::condition_variable >;

// boost.fiber
template< typename T >
using future = base_future< T, boost::fibers::mutex, boost::fibers::condition_variable >;
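Fleshed out into a compilable skeleton (my illustration of the idea above, not Oliver's code; only set_value/get shown, exceptions and move-only results omitted — note that a generic Mutex needs condition_variable_any on the thread side, while Boost.Fiber's condition_variable already works with fibers::mutex):

```c++
#include <condition_variable>
#include <mutex>
#include <utility>

// Future skeleton templated on the synchronization primitives, so the same
// code can serve thread futures and fiber futures.
template <typename T, typename Mutex, typename Condition>
class base_future {
    Mutex mtx_;
    Condition cond_;
    bool ready_ = false;
    T value_{};
public:
    void set_value(T v) {
        {
            std::unique_lock<Mutex> lk(mtx_);
            value_ = std::move(v);
            ready_ = true;
        }   // unlock before notifying to avoid a pointless wakeup-and-block
        cond_.notify_all();
    }
    T get() {
        std::unique_lock<Mutex> lk(mtx_);
        cond_.wait(lk, [this] { return ready_; });  // predicate overload
        return value_;
    }
};

// Thread flavour; a fiber flavour would substitute boost::fibers::mutex
// and boost::fibers::condition_variable for these two arguments.
template <typename T>
using thread_future =
    base_future<T, std::mutex, std::condition_variable_any>;
```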
Shouldn't the next fiber selection be just simple list pop front when there are runnable fibers (i.e. no work stealing is required)?
yes
So boost.fiber is about an order of magnitude. It is good, but was hoping for more.
std::thread was tested
Btw, why spinlock is in details? It could be useful to expose it.
It is an implementation detail. Niall has mentioned his boost.spinlock library (1-2 years ago), and probably I could replace my spinlock implementation with an official one in the future.
Le 05/09/15 04:38, Oliver Kowalke a écrit :
2015-09-05 0:53 GMT+02:00 Giovanni Piero Deretta
: or
2) boost::fiber::future is simply a tiny wrapper over boost::thread::future that overrides the wait policy.
> the futures of boost.thread and boost.fiber differ effectively only in the type of mutex and condition_variable (implementing the suspend/resume mechanism for threads/fibers). a base implementation of future has to take the types of mutex and condition_variable as template args. [...]
While it can be useful to have a common implementation, I don't think it solves the issue of interaction between different implementations of some Future concept. Vicente
Le 05/09/15 00:53, Giovanni Piero Deretta a écrit :
I've not a solution to this problem and, like others, I'm trying to find one. I don't think the solution to being able to work with different futures in when_all/when_any comes from making all of them derive from a base class. We need a common interface/protocol for all of them, of course, but we have yet to decide what the type of the resulting future is. While the subject is very important, I believe that it is out of the scope of the Fiber library review. Best, Vicente
2015-09-06 1:37 GMT+02:00 Vicente J. Botet Escriba :
> While the subject is very important, I believe that it is out of the scope of the Fiber library review.

I agree
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form." Since then Oliver has substantially improved documentation, performance, library customization and the underlying implementation, and is bringing the library back for mini-review.
I just had a quick look at the future implementation, to see if proper chrono support (one of the reasons for my previous rejection) was implemented. While `wait_for` has the standard signature, it does not seem `wait_until` does.

There's also a great deal of code duplication for `T`, `T&` and `void`, while only a handful of functions differ. This is not a sign of good design, and will make maintenance difficult.

The shared state has a few functions that *must* be called while holding a lock, like `mark_ready_and_notify_`. It'd be better to make that explicit, and preferably have the type system enforce it, as currently proving the correctness of those functions requires tracing all the callers.

Uses of `condition::wait` and friends manually loop, instead of leveraging the predicate-based overload for readability.

Finally, `std::move` the algorithm lives in `<algorithm>`, while `std::move` the utility lives in `<utility>`.

That's all for now. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On Fri, Sep 4, 2015 at 2:24 PM, Agustín K-ballo Bergé
I just had a quick look at the future implementation, to see if proper chrono support (one of the reasons for my previous rejection) was implemented. While `wait_for` depicts the standard signature, it does not seem `wait_until` does.
I want to make it as easy as possible for Oliver to act on people's feedback. Please suggest a specific change?
The shared state has a few functions that *must* be called while holding a lock, like `mark_ready_and_notify_`. It'd be better to make that explicit, and preferably have the type system enforce it, as currently proving the correctness of those functions requires tracing all the callers.
Is your suggestion that Fiber's use of its shared_state track more closely the way Boost.Thread uses its own shared_state?
On 9/4/2015 3:46 PM, Nat Goodspeed wrote:
On Fri, Sep 4, 2015 at 2:24 PM, Agustín K-ballo Bergé
wrote: I just had a quick look at the future implementation, to see if proper chrono support (one of the reasons for my previous rejection) was implemented. While `wait_for` depicts the standard signature, it does not seem `wait_until` does.
I want to make it as easy as possible for Oliver to act on people's feedback. Please suggest a specific change?
That would be: "- Every API involving time point or duration should accept arbitrary clock types, immediately converting to a canonical duration type for internal use." For further reference: http://talesofcpp.fusionfenix.com/post-15/rant-on-the-templated-nature-of-st...
The shared state has a few functions that *must* be called while holding a lock, like `mark_ready_and_notify_`. It'd be better to make that explicit, and preferably have the type system enforce it, as currently proving the correctness of those functions requires tracing all the callers.
Is your suggestion that Fiber's use of its shared_state track more closely the way Boost.Thread uses its own shared_state?
I am not familiar with the way Boost.Thread uses its own shared_state. My concern is that functions like `mark_ready_and_notify_` have certain lock requirements that currently are implicit:

    void mark_ready_and_notify_() { // is there a race here??
        ready_ = true;
        waiters_.notify_all();
    }

To prove the correctness of those functions, one has to look up all calls to `mark_ready_and_notify_`, all calls to those calls, and so on. To make the code readable, I would expect lock requirements to be stated explicitly, for instance:

    void mark_ready_and_notify_(std::unique_lock< mutex >& lk) {
        ready_ = true;
        waiters_.notify_all();
    }

Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:

    void mark_ready_and_notify_(std::unique_lock< mutex >&& lk) {
        ready_ = true;
        lk.unlock();
        waiters_.notify_all();
    }

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
On 9/4/2015 3:46 PM, Nat Goodspeed wrote:
On Fri, Sep 4, 2015 at 2:24 PM, Agustín K-ballo Bergé
wrote: I just had a quick look at the future implementation, to see if proper
chrono support (one of the reasons for my previous rejection) was implemented. While `wait_for` depicts the standard signature, it does not seem `wait_until` does.
I want to make it as easy as possible for Oliver to act on people's feedback. Please suggest a specific change?
That would be:
"- Every API involving time point or duration should accept arbitrary clock types, immediately converting to a canonical duration type for internal use."
future<> has an overload for wait_until():

    template< typename ClockType >
    future_status wait_until( typename ClockType::time_point const&) const;
On 9/4/2015 4:13 PM, Oliver Kowalke wrote:
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
: On 9/4/2015 3:46 PM, Nat Goodspeed wrote:
On Fri, Sep 4, 2015 at 2:24 PM, Agustín K-ballo Bergé
wrote: I just had a quick look at the future implementation, to see if proper
chrono support (one of the reasons for my previous rejection) was implemented. While `wait_for` depicts the standard signature, it does not seem `wait_until` does.
I want to make it as easy as possible for Oliver to act on people's feedback. Please suggest a specific change?
That would be:
"- Every API involving time point or duration should accept arbitrary clock types, immediately converting to a canonical duration type for internal use."
future<> has an overload for wait_until()
template< typename ClockType > future_status wait_until( typename ClockType::time_point const&) const;
Here is the standard definition of `future::wait_until`:

    template< class Clock, class Duration >
    future_status wait_until( chrono::time_point< Clock, Duration > const& abs_time) const;

Note how it takes *any* time point, and automatically deduces template arguments.
2015-09-04 21:35 GMT+02:00 Agustín K-ballo Bergé
Here is the standard definition of `future::wait_until`:

    template< class Clock, class Duration >
    future_status wait_until( chrono::time_point< Clock, Duration > const& abs_time) const;

Note how it takes *any* time point, and automatically deduces template arguments.
I use this pattern in the other classes too (for instance condition::wait_until())
On 9/4/2015 5:04 PM, Oliver Kowalke wrote:
2015-09-04 21:35 GMT+02:00 Agustín K-ballo Bergé
: Here is the standard definition of `future::wait_until`:

    template< class Clock, class Duration >
    future_status wait_until( chrono::time_point< Clock, Duration > const& abs_time) const;

Note how it takes *any* time point, and automatically deduces template arguments.
I use this pattern in the other classes too (for instance condition::wait_until())
Is there anything particular about `future` and `shared_future` that prevents you from doing it correctly for them too? Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 22:18 GMT+02:00 Agustín K-ballo Bergé
On 9/4/2015 5:04 PM, Oliver Kowalke wrote:
2015-09-04 21:35 GMT+02:00 Agustín K-ballo Bergé
: Here is the standard definition of `future::wait_until`:

    template< class Clock, class Duration >
    future_status wait_until( chrono::time_point< Clock, Duration > const& abs_time) const;

Note how it takes *any* time point, and automatically deduces template arguments.
I use this pattern in the other classes too (for instance condition::wait_until())
Is there anything particular about `future` and `shared_future` that prevents you from doing it correctly for them too?
no, it is fixed in the develop branch - I missed fixing it when I changed the chrono-related code in the other classes
On 9/4/2015 11:12 PM, Oliver Kowalke wrote:
2015-09-04 22:18 GMT+02:00 Agustín K-ballo Bergé
: On 9/4/2015 5:04 PM, Oliver Kowalke wrote:
2015-09-04 21:35 GMT+02:00 Agustín K-ballo Bergé
: Here is the standard definition of `future::wait_until`:

    template< class Clock, class Duration >
    future_status wait_until( chrono::time_point< Clock, Duration > const& abs_time) const;

Note how it takes *any* time point, and automatically deduces template arguments.
I use this pattern in the other classes too (for instance condition::wait_until())
Is there anything particular about `future` and `shared_future` that prevents you from doing it correctly for them too?
no, it is fixed in branch develop - I missed to fix it as I changed the chrono related code in the other classes
That's good to know. Moving on, I tried to peek at the implementation to see if you meet the standard required timing specifications (30.2.4 Timing specifications [thread.req.timing]). You don't seem to meet any of them. I cannot go into detail at this time, but I'll try to summarize it in my final review. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:
do you have a reference? in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are always called in front of pthread_mutex_unlock
On 9/5/2015 3:42 AM, Oliver Kowalke wrote:
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
: Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:
do you have a reference?
Well, it's not exactly new and it is baked into the design of `std::condition_variable`. I guess this will have to do for reference: http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are always called in front of pthread_mutex_unlock
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling. I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-05 15:12 GMT+02:00 Agustín K-ballo Bergé
`std::condition_variable`. I guess this will have to do for reference: http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
OK, that's what I had read yesterday too
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are
always called in front of pthread_mutex_unlock
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling. I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
hmm - Anthony also does the unlock after the notification in the examples of 'C++ Concurrency in Action' - anyway, I've already changed the implementation accordingly in the develop branch.
Oliver Kowalke wrote:
hmm - Anthony does also the unlock after the notification in the examples of 'C++ Concurrency in Action'
Books deliberately don't tell you to notify without holding the lock, because many people infer from that that you can notify without taking the lock at all, which is wrong and leads to subtle bugs.
Agustín K-ballo Bergé wrote:
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling.
I don't think that's true. The ability to notify without holding the lock comes from pthreads. It's not a C++ invention, it's a pthread invention; you're most definitely not expected to hold the lock. It's also more efficient because otherwise the awakened thread would immediately block on the mutex.
I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
Some pthreads implementations supposedly have this optimization, but not all; and if you notify without holding the mutex, the thread would proceed immediately.
On 9/5/2015 10:47 AM, Peter Dimov wrote:
Agustín K-ballo Bergé wrote:
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling.
I don't think that's true. The ability to notify without holding the lock comes from pthreads. It's not a C++ invention, it's a pthread invention; you're most definitely not expected to hold the lock.
Agreed, but on the other hand it appears pthreads gives special semantics to notifications while the lock is held. The reference talks about "predictable scheduling".
It's also more efficient because otherwise the awakened thread would immediately block on the mutex.
Nod, this is explained in the article linked before.
I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
Some pthreads implementations supposedly have this optimization, but not all; and if you notify without holding the mutex, the thread would proceed immediately.
Your guess is as good as mine. I'm not a pthread standard expert, and that last comment was just hearsay on my part. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On Sat, Sep 5, 2015 at 4:08 PM, Agustín K-ballo Bergé
On 9/5/2015 10:47 AM, Peter Dimov wrote:
Agustín K-ballo Bergé wrote:
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling.
I don't think that's true. The ability to notify without holding the lock comes from pthreads. It's not a C++ invention, it's a pthread invention; you're most definitely not expected to hold the lock.
Agreed, but on the other hand it appears pthreads gives special semantics to notifications while the lock is held. The reference talks about "predictable scheduling".
It is not really about special semantics; it is a consequence of other POSIX guarantees. Waiters are woken up in FIFO order (at least under realtime FIFO scheduling), but if the signaling is done outside the critical section, a late thread might acquire the critical section (and consume a resource) after the condition has become true, but before older waiters had a chance to acquire it. This might be particularly important with realtime systems.
It's also more efficient because otherwise the awakened thread would immediately block on the mutex.
Nod, this is explained in the article linked before.
I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
Some pthreads implementations supposedly have this optimization, but not all; and if you notify without holding the mutex, the thread would proceed immediately.
Your guess is as good as mine. I'm not a pthread standard expert, and that last comment was just hearsay on my part.
IIRC, at least glibc doesn't perform this optimization anymore. For more details on this topic, you can search for 'wait morphing'. HTH, -- gpd
Giovanni Piero Deretta wrote:
Waiters are woken up in FIFO order (at least under realtime FIFO scheduling), but if the signaling is done outside the critical section, a late thread might acquire the critical section (and consume a resource) after the condition has become true, but before older waiters had a chance to acquire it. This might be particularly important with realtime systems.
That's typically what one wants for performance though (absent special FIFO requirements), as that late thread is running (has a CPU) and has its cache hot.
On Sun, Sep 6, 2015 at 9:12 PM, Peter Dimov
Giovanni Piero Deretta wrote:
Waiters are woken up in FIFO order (at least under realtime FIFO scheduling), but if the signaling is done outside the critical section, a late thread might acquire the critical section (and consume a resource) after the condition has become true, but before older waiters had a chance to acquire it. This might be particularly important with realtime systems.
That's typically what one wants for performance though (absent special FIFO requirements), as that late thread is running (has a CPU) and has its cache hot.
Yes of course - in a normal application that is what you normally want for performance, but on a realtime system the constraints are different. For example the late thread might have lower priority than the currently blocked threads. -- gpd
2015-09-05 15:12 GMT+02:00 Agustín K-ballo Bergé
On 9/5/2015 3:42 AM, Oliver Kowalke wrote:
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
: Although holding a mutex while firing on the condition variable is
considered bad practice, so this would be even better:
do you have a reference?
Well, it's not exactly new and it is baked into the design of `std::condition_variable`. I guess this will have to do for reference: http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are
always called in front of pthread_mutex_unlock
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling. I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
keep in mind that fibers do not run in parallel - in a single thread the sequence of

    unique_lock< mutex > lk( mtx);
    ...
    lk.unlock();
    cnd.notify_one();

is the same as

    unique_lock< mutex > lk( mtx);
    ...
    cnd.notify_one();
    lk.unlock();

(no parallelism like threads)
Am 05.09.2015 3:59 nachm. schrieb "Oliver Kowalke" :
2015-09-05 15:12 GMT+02:00 Agustín K-ballo Bergé
On 9/5/2015 3:42 AM, Oliver Kowalke wrote:
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:
do you have a reference?
Well, it's not exactly new and it is baked into the design of `std::condition_variable`. I guess this will have to do for reference: http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are always called in front of pthread_mutex_unlock
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling. I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
keep in mind that fibers do not run in parallel - in a single thread the sequence of

    unique_lock< mutex > lk( mtx);
    ...
    lk.unlock();
    cnd.notify_one();

is the same as

    unique_lock< mutex > lk( mtx);
    ...
    cnd.notify_one();
    lk.unlock();

(no parallelism like threads)

What about two fibers running on different OS threads? Are they not allowed to synchronize with each other?
On Sat, Sep 5, 2015 at 10:03 AM, Thomas Heller
What about two fibers running on different OS threads? Are they not allowed to synchronize with each other?
http://olk.github.io/libs/fiber/doc/html/fiber/overview.html#fiber.overview....
Am 05.09.2015 4:06 nachm. schrieb "Nat Goodspeed"
On Sat, Sep 5, 2015 at 10:03 AM, Thomas Heller
wrote:
What about two fibers running on different OS threads? Are they not allowed to synchronize with each other?
http://olk.github.io/libs/fiber/doc/html/fiber/overview.html#fiber.overview....
Thanks, that's what I was hoping for. I was just a little confused about Oliver's previous explanation. Furthermore I think it's correct to state that fibers always run concurrently (please note the subtle difference between parallelism and concurrency).
2015-09-05 16:28 GMT+02:00 Thomas Heller
Thanks, that's what I was hoping for. I was just a little confused about Oliver's previous explanation. Furthermore I think it's correct to state that fibers always run concurrently (please note the subtle difference between parallelism and concurrency).
sorry - I just tried to show that fibers might behave differently than threads (e.g. a fiber which gets signaled via a condition variable does not run immediately -> it is deferred until the next cooperative suspension); anyway, the code is fixed accordingly
Le 05/09/15 16:03, Thomas Heller a écrit :
Am 05.09.2015 3:59 nachm. schrieb "Oliver Kowalke"
:
2015-09-05 15:12 GMT+02:00 Agustín K-ballo Bergé
: On 9/5/2015 3:42 AM, Oliver Kowalke wrote:
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:
do you have a reference?
Well, it's not exactly new and it is baked into the design of `std::condition_variable`. I guess this will have to do for reference: http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are
always called in front of pthread_mutex_unlock
Alas pthread specifies different semantics than the standard library, and there you are actually expected to hold the lock if you want predictable scheduling. I hear pthread won't actually wake up any threads then (which wouldn't be able to make progress otherwise), but rather switch them from waiting on the cv to waiting on the mutex to avoid useless context switches; when the mutex is finally unlocked the thread will finally wake up.
keep in mind that fibers do not run in parallel - in a single thread the sequence of
unique_lock< mutex > lk( mtx); ... lk.unlock(); cnd.notify_one();
is the same as
unique_lock< mutex > lk( mtx); ... cnd.notify_one(); lk.unlock();
(no parallelism like threads) What about two fibers running on different OS threads? Are they not allowed to synchronize with each other?
I expect that the answer is yes, otherwise I don't know what we are reviewing. How Boost.Fiber can optimize the synchronization of two fibers on the same thread is another issue. So you are right, fibers should be able to run in parallel. Vicente
2015-09-06 2:06 GMT+02:00 Vicente J. Botet Escriba :
What about two fibers running on different OS threads? Are they not allowed to synchronize with each other?
I expect that the answer is yes, otherwise I don't know what we are reviewing. How Boost.Fiber can optimize the synchronization of two fibers on the same thread is another issue. So you are right, fibers should be able to run in parallel.
boost.fiber supports this - see
http://olk.github.io/libs/fiber/doc/html/fiber/rationale.html#fiber.rational...
Le 05/09/15 08:42, Oliver Kowalke a écrit :
2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé
: Although holding a mutex while firing on the condition variable is considered bad practice, so this would be even better:
do you have a reference?
in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are always called in front of pthread_mutex_unlock
Hi, I don't know if it is bad or good practice. In any case we don't need to maintain the lock while notifying. Vicente
2015-09-06 2:12 GMT+02:00 Vicente J. Botet Escriba :
I don't know if it is bad or good practice. In any case we don't need to maintain the lock while notifying.
I've already modified the sources accordingly - please note that this pattern (unlock before signaling) is not necessary for fibers running in the same thread (because both run sequentially, not in parallel as threads do)
2015-09-04 20:24 GMT+02:00 Agustín K-ballo Bergé
There's also a great deal of code duplication for `T`, `T&` and `void` while only a handful of functions differ. This is not a sign of good design, and will make maintenance difficult.
could you point to an exact source line, please?
Finally, `std::move` the algorithm lives in `<algorithm>`, while `std::move` the utility lives in `<utililty>`.
hmm, probably mistaken - anyway I'll fix it
On 9/4/2015 4:00 PM, Oliver Kowalke wrote:
2015-09-04 20:24 GMT+02:00 Agustín K-ballo Bergé
: There's also a great deal of code duplication for `T`, `T&` and `void`
while only a handful of functions differ. This is not a sign of good design, and will make maintenance difficult.
could you point to an exact source line, please?
Certainly, that would be:
- All of `future` and `shared_future` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/fu...
- All of `shared_state` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
- All of `promise` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/pr...
- All of `task_object` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
For each of them, there is only one function that varies amongst specializations. Moving common code to a helper base class will make the code readable, easier to maintain, and reduce the bloating.
Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 21:09 GMT+02:00 Agustín K-ballo Bergé
- All of `future` and `shared_future` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/fu...
- All of `shared_state` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
- All of `promise` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/pr...
- All of `task_object` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
For each of them, there is only one function that varies amongst specializations. Moving common code to a helper base class will make the code readable, easier to maintain, and reduce the bloating.
OK, you refer to the template specializations of future, promise etc. I prefer to avoid the additional indirections introduced by deriving from base classes.
On 9/4/2015 4:19 PM, Oliver Kowalke wrote:
2015-09-04 21:09 GMT+02:00 Agustín K-ballo Bergé
: - All of `future` and `shared_future` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/fu...
- All of `shared_state` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
- All of `promise` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/pr...
- All of `task_object` https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/de...
For each of them, there is only one function that varies amongst specializations. Moving common code to a helper base class will make the code readable, easier to maintain, and reduce the bloating.
OK, you refer to template specializations of future, promise etc. I prefer to prevent additional indirections introduced by deriving from base classes
What additional indirections would deriving from a base class introduce? Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-04 21:36 GMT+02:00 Agustín K-ballo Bergé
What additional indirections would deriving from a base class introduce?
depends on the inheritance type:
- private inheritance from helper == 'implemented-in-terms-of': functions of helper are not part of the public interface (future/promise has to invoke helper functions internally)
- public inheritance from helper == 'is-a': functions of helper are part of the public interface, but I don't like to treat a future the same as a promise (or packaged_task) => void foo( helper &) could be called for future and promise
so I believe using template specialization only is reasonable
On 9/4/2015 11:10 PM, Oliver Kowalke wrote:
2015-09-04 21:36 GMT+02:00 Agustín K-ballo Bergé
: What additional indirections would deriving from a base class introduce?
depends on the inheritance type
- private inheritance from helper == 'implemented-in-terms-of': functions of helper are not part of the public interface (future/promise has to invoke helper functions internally)
That's not entirely truthful:

    struct base {
        void foo();
        void bar();
    };

    struct derived : private base {
        using base::bar; // bar is part of the public interface now
    };

    derived d;
    d.foo(); // error, foo is private
    d.bar(); // fine
- public inheritance from helper == 'is-a': functions of helper are part of the public interface, but I don't like to track a future as the same as a promise (or packaged_task) => void foo( helper &) could be called for future and promise
I fail to see how this is relevant. You'd have `future_base`, `promise_base`, etc. There is no "tracking" issue in sight.
so I believe using template specialization only is reasonable
No, sorry, that's just unreasonable; it goes against some pretty fundamental design principles. I must have failed to explain myself correctly, please don't hesitate to keep asking if I am unclear. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/4/2015 11:10 PM, Oliver Kowalke wrote:
2015-09-04 21:36 GMT+02:00 Agustín K-ballo Bergé
: What additional indirections would deriving from a base class introduce?
depends on the inheritance type
- private inheritance from helper == 'implemented-in-terms-of': functions of helper are not part of the public interface (future/promise has to invoke helper functions internally)
That's not entirely truthful:
struct base { void foo(); void bar(); };
struct derived : private base { using base::bar; // bar is part of the public interface now };
derived d; d.foo(); // error, foo is private d.bar(); // fine
Le 05/09/15 06:15, Agustín K-ballo Bergé a écrit :
- public inheritance from helper == 'is-a': functions of helper are part of the public interface, but I don't like to track a future as the same as a promise (or packaged_task)
I don't think Agustin is asking for this kind of inheritance. I believe that he is talking about the common part of future<T>, future<T&> and future<void>.
=> void foo( helper &) could be called for future and promise
I fail to see how this is relevant. You'd have `future_base`, `promise_base`, etc. There is no "tracking" issue in sight.
so I believe using template specialization only is reasonable
No, sorry, that's just unreasonable; it goes against some pretty fundamental design principles. I must have failed to explain myself correctly, please don't hesitate to keep asking if I am unclear.
Vicente
Am 04.09.2015 5:15 nachm. schrieb "Nat Goodspeed"
Hi all,
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th. It was reviewed in January 2014; the verdict at that time was "not in its present form." Since then Oliver has substantially improved documentation, performance, library customization and the underlying implementation, and is bringing the library back for mini-review.
In the performance section, were all tests executed on a single thread? Regarding the different stack allocators, do they have any noticeable impact on performance?
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
[quoted announcement snipped]
I had a look at `condition_variable[_any]` documentation, and was surprised to see that those two classes are guaranteed to be just aliases to a single implementation.

The destructor documentation states: "All fibers waiting on *this have been notified by a call to notify_one or notify_all (though the respective calls to wait, wait_for or wait_until need not have returned)." However, the implementation tries to grab a lock on a mutex member after the fiber has been resumed: https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/condition... If the destructor starts after waiters have been notified but before the respective calls to `wait[_xxx]` have returned, then this access would potentially be undefined behavior. Is there anything specific about the way fibers are implemented that guarantees that a call to `notify[_all]` can't return until all calls to `wait[_xxx]` have returned? Or any other implementation detail that guarantees this to be well defined?

Have you had a look at N2406 [http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2406.html]? It has a lot of valuable information on this subject.

Also, I would like to suggest you make the name `condition` an implementation detail, even if a single implementation meets the requirements of both cvs. As the documentation states, this name has long been deprecated in Boost.Thread.

Finally, I believe someone already mentioned that the use of a `queue` while waiting is inefficient, so I won't go into detail.

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/7/2015 12:52 PM, Agustín K-ballo Bergé wrote:
However, the implementation tries to grab a lock on a mutex member after the fiber has been resumed: https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/condition...
Sorry for the noise, I have just realized after hitting send that `lt` and `lk` actually have different spellings. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/7/2015 12:52 PM, Agustín K-ballo Bergé wrote:
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
[quoted announcement snipped]
I had a look at `condition_variable[_any]` documentation, and was surprised to see that those two classes are guaranteed to be just aliases to a single implementation.
A bit more on the subject, the documentation for `wait()` states:

"Precondition: lk is locked by the current fiber, and either no other fiber is currently waiting on *this, or the execution of the mutex() member function on the lk objects supplied in the calls to wait in all the fibers currently waiting on *this would return the same value as lk->mutex() for this call to wait."

This seems to come directly from the standard, where it applies only to `condition_variable` and not `condition_variable_any`. Not only does `condition_variable_any` not have that precondition, it doesn't even require `lk->mutex()` to be well-formed. Your implementation doesn't seem to have those requirements, since (I assume) it works fine for `condition_variable_any`.

"Note: The Precondition is a bit dense. It merely states that all the fibers calling wait on *this must wait on lk objects governing the same mutex."

Fine, sans **currently**...

"Three distinct objects are involved in any condition_variable::wait() call: the condition_variable itself, the mutex coordinating access between fibers and a lock object (e.g. std::unique_lock)."

I wouldn't call a "call" an "object", but fine...

"In some sense it would be nice if the condition_variable's constructor could accept the related mutex object, enforcing agreement across all wait() calls; but the existing APIs prevent that."

One does not necessarily need to use the same mutex with a single condition variable over and over again; the requirement applies to concurrent waiters only (and again, only for `condition_variable`).

"Instead we must require the wait() call to accept a reference to the local lock object."

Indeed you must, since the user will have to do some checks under the lock before calling `wait()`, and those and the wait must happen atomically (or else risk sleeping forever).

"It is an error to reuse a given condition_variable instance with lock objects that reference different underlying mutex objects. It would be like a road intersection with traffic lights independent of one another: sooner or later a collision will result."

This states a different, *stronger* requirement than the precondition, so which is it?

Long story short: as far as I understand, `condition_variable` allows for a more efficient implementation where the lock used to protect the condition is also used to protect the condition variable's internal structures, while `condition_variable_any` needs an internal lock of its own. But I never quite fully understood their differences, and I wasn't present for the discussion, so perhaps someone else can enlighten us.

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On Mon, Sep 7, 2015 at 9:18 PM, Agustín K-ballo Bergé
"It is an error to reuse a given condition_variable instance with lock objects that reference different underlying mutex objects. It would be like a road intersection with traffic lights independent of one another: sooner or later a collision will result. "
This states a different *stronger* requirement than the precondition, so which is it?
I'll confess to writing that non-normative Note, my attempt to unpack the Precondition. You are correct that the Note overlooks the case in which the condition variable becomes idle and is then reused by a consistent set of waiters on an entirely different mutex. The Precondition states the actual requirement.
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
docs: http://olk.github.io/libs/fiber/doc/html/index.html git: https://github.com/olk/boost-fiber
Which branch are we supposed to be reviewing? I assumed the _master_ branch, but I notice some code I looked at on Friday has changed since then. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-07 18:11 GMT+02:00 Agustín K-ballo Bergé
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
docs: http://olk.github.io/libs/fiber/doc/html/index.html git: https://github.com/olk/boost-fiber
Which branch are we supposed to be reviewing? I assumed the _master_ branch, but I notice some code I looked at on Friday has changed since then.
oh - sorry, my mistake; I incorporated the requested changes in the develop branch and inadvertently merged to master
2015-09-07 18:19 GMT+02:00 Oliver Kowalke
[quoted thread snipped]
oh - sorry, my mistake; I incorporated the requested changes in the develop branch.
== I modified the code regarding std::chrono, as you have suggested (branch develop)
On 9/7/2015 1:21 PM, Oliver Kowalke wrote:
2015-09-07 18:19 GMT+02:00 Oliver Kowalke
[quoted thread snipped]
== I modified the code regarding std::chrono, as you have suggested (branch develop)
You have just merged develop to master yet again, and the code I was currently trying to understand changed yet again (something got renamed I think?). Please **please** keep a stable branch for review, it doesn't need to be _master_. Changes to address feedback are welcomed, but please don't merge those to the branch under review as those are pretty disruptive. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
2015-09-07 18:31 GMT+02:00 Agustín K-ballo Bergé
On 9/7/2015 1:21 PM, Oliver Kowalke wrote:
You have just merged develop to master yet again, and the code I was currently trying to understand changed yet again (something got renamed I think?). Please **please** keep a stable branch for review, it doesn't need to be _master_. Changes to address feedback are welcomed, but please don't merge those to the branch under review as those are pretty disruptive.
umm - sorry, I was in the wrong window (the merge should have been done for Boost.Context). I could revert the changes on master if you like.
2015-09-07 20:46 GMT+02:00 Oliver Kowalke
2015-09-07 18:31 GMT+02:00 Agustín K-ballo Bergé
On 9/7/2015 1:21 PM, Oliver Kowalke wrote:
You have just merged develop to master yet again, and the code I was currently trying to understand changed yet again (something got renamed I think?). Please **please** keep a stable branch for review, it doesn't need to be _master_. Changes to address feedback are welcomed, but please don't merge those to the branch under review as those are pretty disruptive.
umm - sorry, I was in the wrong window (the merge should have been done for Boost.Context). I could revert the changes on master if you like.
would be a reset + a force push
On 7 Sep 2015 7:56 pm, "Oliver Kowalke"
[quoted thread snipped]
umm - sorry, I was in the wrong window (the merge should have been done for Boost.Context). I could revert the changes on master if you like.
would be a reset + a force push
Maybe it would be better to tag the review version and just post the tag as a reply to the original announcement. -- gpd
On 9/7/2015 4:00 PM, Giovanni Piero Deretta wrote:
[quoted thread snipped]
Maybe it would be better to tag the review version and just post the tag as a reply to the original announcement.
+1 This would do. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/7/2015 4:29 PM, Oliver Kowalke wrote:
tag v1.0 on branch master
I don't see a tag v1.0, have you pushed it? Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
[quoted announcement snipped]
`fiber::async` signature is still wrong. Note this was a C++11 defect: http://cplusplus.github.io/LWG/lwg-defects.html#2021 Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On Mon, Sep 7, 2015 at 4:28 PM, Agustín K-ballo Bergé
`fiber::async` signature is still wrong. Note this was a C++11 defect:
So to be clear, I think you are requesting this instead?

    template< typename Fn, typename ... Args >
    future<
        typename std::result_of<
            typename std::decay< Fn >::type( typename std::decay< Args >::type ... )
        >::type
    >
    async( Fn && fn, Args && ... args) { ... }

(making the corresponding future<> type change in both overloads of course)
On 9/7/2015 5:42 PM, Nat Goodspeed wrote:
On Mon, Sep 7, 2015 at 4:28 PM, Agustín K-ballo Bergé
wrote: `fiber::async` signature is still wrong. Note this was a C++11 defect:
[quoted signature snipped]
That's the signature fix, yes. It appears the implementation already decay-copies the arguments correctly. But since you had me double check:

- The result paragraph reads "future< typename result_of< Fn >::type > representing the shared state associated with the asynchronous execution of fn." - that's wrong too.
- The implementation of `invoke_helper` looks weird: https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/detail/in... `std::get< I >( std::forward< Tpl >( tpl) )` should suffice.
- The implementation of `async` moves a local future while returning it. This is an anti-pattern, since it prevents elision.

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/7/2015 5:58 PM, Agustín K-ballo Bergé wrote:
[quoted thread snipped]
Just to be extra explicit, although I think it should be obvious, the following lines would have to be changed to follow suit: https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/as... https://github.com/olk/boost-fiber/blob/master/include/boost/fiber/future/as... Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
[quoted announcement snipped]
The section "Integrating Fibers with Asynchronous Callbacks" contains the following snippet:

    boost::fibers::promise< AsyncAPI::errorcode > promise;
    boost::fibers::future< AsyncAPI::errorcode > future( promise.get_future() );
    // We can confidently bind a reference to local variable 'promise' into
    // the lambda callback because we know for a fact we're going to suspend
    // (preserving the lifespan of both 'promise' and 'future') until the
    // callback has fired.
    api.init_write( data,
                    [&promise]( AsyncAPI::errorcode ec){ promise.set_value( ec); });
    return future.get();

The comment is not entirely accurate, as there is a potential for the `~promise` destructor to start executing before `promise::set_value` has returned (crashes due to this usage have been spotted in the wild). Unlike `std::condition_variable` and the upcoming `std::latch`, `std::promise` makes no such guarantee, nor does `fiber::promise` document one. Instead of documenting it (your current implementation does indeed look safe), I'd suggest using an idiom that would behave well with `std::promise`, `boost::promise` and `whatever::promise` too:

    [promise = std::move(promise)]...

And simply drop the misleading remark.

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/4/2015 12:14 PM, Nat Goodspeed wrote:
[quoted announcement snipped]
`packaged_task` decay-copies the task arguments, while it shouldn't. This is not only a matter of performance, or artificially augmented requirements; this can result in either subtly wrong semantics or plain compilation errors due to return type mismatch when calling a completely different function than the one intended by the user. At this point I'd like to stop spamming the list with issues in the implementation of the thread clause facilities, which obviously aren't ready yet, and suggest the author to use one or several of the exhaustive and not-so-exhaustive test suites readily available (the ones from libc++, libstdc++, Boost.Threads, HPX, etc). Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On Mon, Sep 7, 2015 at 10:53 PM, Agustín K-ballo Bergé
At this point I'd like to stop spamming the list with issues in the implementation of the thread clause facilities, which obviously aren't ready yet, and suggest the author to use one or several of the exhaustive and not-so-exhaustive test suites readily available (the ones from libc++, libstdc++, Boost.Threads, HPX, etc).
Stipulating your implementation concerns about the "Synchronization" facilities, I would very much like for you to continue providing any other feedback you may have that would not be surfaced by a test suite -- such as your comments on documentation.
On Fri, Sep 4, 2015 at 11:14 AM, Nat Goodspeed
The mini-review of Boost.Fiber by Oliver Kowalke begins today, Friday September 4th, and closes Sunday September 13th.
I'm pleased to see the level of interest in this library. Many people have contributed to the discussions so far. However, as of this moment we have no definite reviews in hand.

I invite those of you who have an opinion to state explicitly whether you believe the candidate Fiber library should, or should not, be included in Boost. If, regardless of your yes/no vote, you also have ideas about how the library could/should be improved, please state them as explicitly as possible to give the library author the best chance to act. If you have already elaborated a particular suggestion in previous mail, please at least summarize and say so. Please especially note any change that you consider a requirement for library adoption.

Finally -- please distinguish between "perfect" and "good enough." ;-)

Nat Goodspeed
Boost.Fiber Review Manager
On 9/11/2015 11:34 AM, Nat Goodspeed wrote:
[quoted text snipped]
I invite those of you who have an opinion to state explicitly whether you believe the candidate Fiber library should, or should not, be included in Boost.
I have fallen out of commission for reasons outside my control and I won't be able to produce a formal review, so here is the short version: my vote is to reject the library in its current form; the reasons (most of them) are scattered across the mailing list. Regrettably, some of those reasons are no different from those presented in the first review, and have not been addressed in any form.
If, regardless of your yes/no vote, you also have ideas about how the library could/should be improved, please state them as explicitly as possible to give the library author the best chance to act. If you have already elaborated a particular suggestion in previous mail, please at least summarize and say so.
I'd like to see this library becoming part of Boost, so I would like to urge the author to engage with the community. Please ask for feedback well before the review process starts. If you have already had a review, follow up on each piece of feedback, explicitly state how you have addressed it, nag the reviewers to look at your solution to guarantee it matches their expectations, etc.

Finally, although this carries no weight on my vote, I find the decision of making the library C++14-only extremely disappointing. The library doesn't **need** any of the new C++14 features, it just uses them to simplify the work of the author at the cost of reducing the number of potential library users. C++11 support for this library would be almost trivial; I don't think this decision is justified.

Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com
On 9/11/15 9:02 AM, Agustín K-ballo Bergé wrote:
I'd like to see this library becoming part of Boost, so I would like to urge the author to engage in the community. Please ask for feedback way before the review process starts.
I think this is good advice for ALL developers who want to get a library into Boost. I realize it's easier said than done, though. A main motivation for the Boost Library Incubator is to help this along. I realize that it hasn't been particularly successful in this regard. I'm always open to suggestions on what we can do to get more people engaged in a library before it goes to formal review. Robert Ramey
2015-09-11 20:19 GMT+02:00 Robert Ramey
[quoted text snipped]
boost.fiber has been available at the Boost incubator since June 2014 - unfortunately I got no response
On 9/11/15 11:27 AM, Oliver Kowalke wrote:
[quoted text snipped]
boost.fiber has been available at the Boost incubator since June 2014 - unfortunately I got no response
I know this. Sorry I didn't mention it. Robert Ramey
participants (11)
- Agustín K-ballo Bergé
- Giovanni Piero Deretta
- Hartmut Kaiser
- Nat Goodspeed
- Oliver Kowalke
- Paul Fultz II
- Peter Dimov
- Robert Ramey
- Thomas Heller
- TONGARI J
- Vicente J. Botet Escriba