[thread] countdown_latch
I recently ran across the need to spawn a thread and wait for it to finish its setup before continuing. The accepted answer seems to be using a mutex and condition variable to achieve this. However that work clutters up the code quite a bit with implementation details. I came across Java's CountDownLatch, which does basically the same work but bundles it up into a tidy package. It seems to be a fairly trivial but useful abstraction.

implementation: http://codepad.org/E8kd2Eb8

usage:

    class widget
    {
    public:
        widget()
        : latch( 1 )
        , thread_( [&]{ thread_func(); } )
        {
            latch.wait();
        }

    private:
        void setup();
        void run();

        void thread_func()
        {
            setup();
            latch.count_down();
            run();
        }

        countdown_latch latch;
        std::thread thread_;
    };
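The codepad implementation isn't reproduced in the message, so here is a minimal sketch of what such a countdown_latch could look like, assuming the usual mutex-plus-condition-variable approach (this is an illustration, not the actual codepad source):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Sketch of a countdown_latch: count_down() decrements the counter and
// wakes all waiters when it reaches zero; wait() blocks until that happens.
class countdown_latch
{
public:
    explicit countdown_latch(std::size_t count) : count_(count) {}
    countdown_latch(countdown_latch const&) = delete;
    countdown_latch& operator=(countdown_latch const&) = delete;

    void count_down()
    {
        std::lock_guard<std::mutex> lk(mutex_);
        if (count_ > 0 && --count_ == 0)
            cond_.notify_all();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lk(mutex_);
        cond_.wait(lk, [this]{ return count_ == 0; });
    }

    bool try_wait()
    {
        std::lock_guard<std::mutex> lk(mutex_);
        return count_ == 0;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::size_t count_;
};
```

The predicate overload of `wait` handles spurious wakeups, a detail that becomes relevant later in this thread.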
On 21/04/13 10:27, Michael Marcin wrote:
What about using boost::barrier [1]? Best, Vicente [1] http://www.boost.org/doc/libs/1_53_0/doc/html/thread/synchronization.html#th...
On Apr 21, 2013, at 6:45 AM, "Vicente J. Botet Escriba" wrote:
What about using boost::barrier
Of course. I'd forgotten Boost.Thread had that. I wish the standard library had included it. ___ Rob (Sent from my portable computation engine)
On 4/21/2013 5:45 AM, Vicente J. Botet Escriba wrote:
What about using boost::barrier [1]?
Ah, didn't know that existed. Looks like it serves a similar purpose. Still, there are a few differences. barrier looks a bit fatter because it handles restarting on a new generation, and usage requires you to know at construction time the number of participants that are going to call wait(). If you replaced the countdown_latch with a barrier in my example you would introduce more synchronization than is necessary: in addition to the constructor waiting on thread_func, now thread_func must wait for the constructor.
On 21/04/13 13:18, Michael Marcin wrote:
If you replaced the countdown_latch with a barrier in my example you would introduce more synchronization than is necessary: in addition to the constructor waiting on thread_func, now thread_func must wait for the constructor.
You are right. After further analysis, the split of the latch class into wait() and count_down() could be complementary to the current boost::barrier class. The difference being that with a latch the thread will not block at (2), while using a barrier would synchronize all the threads at (1) and (2), as barrier::wait() is equivalent to a latch::count_down() followed by a latch::wait().
There is a C++1y proposal [1] that proposes having two classes. However, having two classes that are really quite close does not seem desirable. The problem is that boost::barrier already uses a wait() function that does the count down and synchronizes until the count is zero. I guess the best approach would be to define a new set of latch classes that provide wait/count_down/count_down_and_wait. The current barrier class could be deprecated once the new classes are ready.
[1] adds the possibility to reset the counter. This doesn't seem to add any complexity to the basic latch.

[1] also adds the possibility to set a function that is called when the counter reaches zero. This is in my opinion useful, but it would need an additional class. I don't think the names latch and barrier would be the right ones if the only difference is the ability to set this function.

boost::barrier resets itself automatically once the counter reaches zero.

I would propose 2/3 classes.

The basic latch responds to your needs and adds some more functions that don't make the implementation less efficient.
    class latch
    {
    public:
        latch( latch const& ) = delete;
        latch& operator=( latch const& ) = delete;

        /// Constructs a latch with a given count.
        latch( std::size_t count );

        /// Blocks until the latch has counted down to zero.
        void wait();
        bool try_wait();

        template <class Rep, class Period>
        cv_status wait_for( const chrono::duration<Rep, Period>& rel_time );
        template <class Clock, class Duration>
        cv_status wait_until( const chrono::time_point<Clock, Duration>& abs_time );

        /// Decrements the count and notifies anyone waiting if we reach zero.
        /// @Requires count must be greater than 0
        void count_down();

        /// Decrements the count and notifies anyone waiting if we reach zero.
        /// Blocks until the latch has counted down to zero.
        /// @Requires count must be greater than 0
        void count_down_and_wait();

        /// Resets the counter.
        /// @Requires This method may only be invoked when there are no other
        /// threads currently inside the count_down_and_wait() method.
        void reset( std::size_t count );
    };

A completion latch has, in addition to its internal counter, a completion function that will be invoked when the counter reaches zero. The completion function is any nullary function returning nothing.

    class completion_latch
    {
    public:
        typedef 'implementation defined' completion_function;
        static const completion_function noop;

        completion_latch( completion_latch const& ) = delete;
        completion_latch& operator=( completion_latch const& ) = delete;

        /// Constructs a latch with a given count and a noop completion function.
        completion_latch( std::size_t count );

        /// Constructs a latch with a given count and a completion function.
        template <typename F>
        completion_latch( std::size_t count, F&& fct );

        /// Blocks until the latch has counted down to zero.
        void wait();
        bool try_wait();

        template <class Rep, class Period>
        cv_status wait_for( const chrono::duration<Rep, Period>& rel_time );
        template <class Clock, class Duration>
        cv_status wait_until( const chrono::time_point<Clock, Duration>& abs_time );

        /// Decrements the count and notifies anyone waiting if we reach zero.
        /// @Requires count must be greater than 0, otherwise the behavior is undefined
        void count_down();

        /// Decrements the count and notifies anyone waiting if we reach zero.
        /// Blocks until the latch has counted down to zero.
        /// @Requires count must be greater than 0
        void count_down_and_wait();

        /// Resets the counter with a new value for the initial count.
        /// @Requires This method may only be invoked when there are no other threads
        /// currently inside the count_down and wait related functions.
        /// It may also be invoked from within the registered completion function.
        void reset( std::size_t count );

        /// Resets the latch with a new completion function.
        /// The next time the internal count reaches 0, this function will be invoked.
        /// @Requires This method may only be invoked when there are no other threads
        /// currently inside the count_down and wait related functions.
        /// It may also be invoked from within the registered completion function.
        /// Returns the previous completion function, or noop if none was set.
        template <typename F>
        completion_function then( F&& fct );
    };

Optionally we could add a cyclic latch that provides the same interface as latch but resets itself when zero is reached (as boost::barrier does). This would be more efficient than being forced to add a completion function that resets the counter.

What do you think of these interfaces?

Best,
Vicente

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3600.html
On 4/21/2013 8:55 AM, Vicente J. Botet Escriba wrote:
I'm not a threading guru so forgive me if I'm totally off.

/// Resets the latch with a new completion function.
/// The next time the internal count reaches 0, this function will be invoked.
/// @Requires This method may only be invoked when there are no other threads
/// currently inside the count_down and wait related functions.
/// It may also be invoked from within the registered completion function.

How is it that this function can be invoked with no threads inside the count_down and wait related functions?

    bool count_down_and_wait()
    {
        boost::unique_lock<boost::mutex> lock(m_mutex);
        unsigned int gen = m_generation;
        if (--m_count == 0)
        {
            m_generation++;
            // 1) completion_func();  <- waiters are still waiting
            m_cond.notify_all();
            // 2) completion_func();  <- waiters are blocked trying to acquire the lock
            lock.unlock();
            // 3) completion_func();  <- waiters could all be gone, but I don't think this is guaranteed
            return true;
        }
        while (gen == m_generation)
            m_cond.wait(lock);
        return false;
    }
On 21/04/13 20:06, Michael Marcin wrote:
You are surely right. My first approach would be option 3, but it has a lot of problems, as you note. I suspect that completion_latch needs a latch internally to ensure that all the waiters have finished. The best would be to prototype these ideas and see what can be done.

Vicente
On 4/21/2013 4:53 PM, Vicente J. Botet Escriba wrote:
Back to the original example with widget for a second: whether it is a latch or a barrier, it is only used during construction but is then kept around as a member for the entire lifetime of the object. This seems unnecessary, but it also seems to be the pattern I've seen everywhere.

Would it be better to do something like:

    class widget
    {
    public:
        widget()
        {
            countdown_latch latch( 1 );
            thread_ = std::thread( [&]{ thread_func( latch ); } );
            latch.wait();
        }

    private:
        void setup();
        void run();

        void thread_func( countdown_latch& latch )
        {
            setup();
            latch.count_down();
            // from here on latch is a dangling reference, don't use it
            run();
        }

        std::thread thread_;
    };

Or is there something cleaner?
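One alternative not raised in the thread, offered purely for comparison: a one-shot signal like this can also be expressed with std::promise/std::future, which keeps the synchronization object scoped to the constructor (the dangling-reference caveat remains the same):

```cpp
#include <atomic>
#include <future>
#include <thread>

class widget
{
public:
    widget()
    {
        std::promise<void> ready;
        std::future<void> f = ready.get_future();
        thread_ = std::thread([this, &ready]{ thread_func(ready); });
        f.wait();   // blocks until setup() has run on the worker thread
        // 'ready' can now safely go out of scope: the promise is fulfilled
    }
    ~widget() { thread_.join(); }

    bool setup_done() const { return setup_done_.load(); }

private:
    void setup() { setup_done_ = true; }
    void run() {}

    void thread_func(std::promise<void>& ready)
    {
        setup();
        ready.set_value();
        // same caveat as with the stack-local latch: 'ready' dangles once
        // the constructor returns, so it must not be touched from here on
        run();
    }

    std::atomic<bool> setup_done_{false};
    std::thread thread_;
};
```

The `setup_done` flag exists only to make the example observable; it is not part of the pattern.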
On 22/04/13 00:55, Michael Marcin wrote:
+ It reduces the size of the widget.
- Dangling reference (but that can be mastered).

I don't know your constraints: why do you need to run the setup on the newly created thread? Could it be executed directly in the widget constructor? If yes:

    class widget
    {
    public:
        widget()
        {
            setup();
            thread_ = std::thread( [&]{ run(); } );
        }
    private:
        void setup();
        void run();
        std::thread thread_;
    };

Do you need to manage the thread? If not, detaching the thread means you don't have to keep it in the widget context:

    class widget
    {
    public:
        widget()
        {
            ...
            std::thread( [&]{ thread_func( latch ); } ).detach();
            ...
        }
    private:
        void setup();
        void run();
    };

Best,
Vicente
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
+ It reduces the size of the widget. - Dangling reference (but that can be mastered).
It also allows latch to be forward declared, with the latch's mutex and condition variable defined only in the source file. Do mutexes and condition variables take up system resources other than plain memory? Can they be exhausted? If it's just a small amount of memory on a not-often-instantiated class, I guess it's not worth the oddity of a dangling reference. Although a latch member is a bit silly if it can never be reset.
I don't know your constraints: why do you need to run the setup on the newly created thread? Could it be executed directly in the widget constructor?
Maybe it's time to get a bit more concrete. In the real scenario widget is a win32_window which wants to run its message pump in a worker thread. A Win32 message queue must run on the thread that created the window, hence setup must happen on that thread.
Do you need to manage the thread? If not, detaching the thread means you don't have to keep it in the widget context.
I post a quit message and join in the destructor. Thanks for your input.

Back on the topic of latches and barriers: I was reading up on Java's CyclicBarrier and found its await() return value pretty interesting, and probably fairly cheap to implement.

"...each invocation of await() returns the arrival index of that thread at the barrier."

"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."

To do this in boost::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
On 22/04/13 10:16, Michael Marcin wrote:
Hi,
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?

I have implemented a first prototype of a latch class that could also be used as a barrier. I don't know if the name is the right one then. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.

The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/

Please let me know what you think.

Best,
Vicente
On 4/27/2013 2:05 AM, Vicente J. Botet Escriba wrote:
I think in latch your wait_for and wait_until don't work correctly in the case of spurious wakes. The easy answer is probably to use the predicate versions, which will also handle the technicalities of turning wait_for into wait_until during the loops caused by spurious wakes, like:
like:
struct count_not_zero
{
count_not_zero(const std::size_t& count_) : count(count_) {}
bool operator()() const { return count != 0; }
const std::size_t& count;
};
template
On 27/04/13 09:51, Michael Marcin wrote:
I think in latch your wait_for and wait_until don't work correctly in the case of spurious wakes.

You are right, my implementation needs a loop (I didn't test it).
Yes this would work much better ;-)
You should probably also add a latch_any to go along with condition_variable_any.
Why? The use of the condition_variable is an internal detail, and it is more efficient than condition_variable_any. How could a latch_any profit from an internal change?

Thanks,
Vicente
Just came across
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3600.html
C++ Latches and Barriers
Synopsis
The synopsis is as follows.
    class latch {
    public:
        explicit latch(size_t count);
        ~latch();
        void count_down();
        void wait();
        bool try_wait();
        void count_down_and_wait();
    };

    class barrier {
    public:
        explicit barrier(size_t num_threads) throw (std::invalid_argument);
        explicit barrier(size_t num_threads, std::function
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
Is the explicit lk.unlock() inside

    bool count_down(unique_lock<mutex> &lk);

needed? I see the two places where it's called are:

    void count_down()
    {
        boost::unique_lock<boost::mutex> lk(mutex_);
        count_down(lk);
    }

and

    void count_down_and_wait()
    {
        boost::unique_lock<boost::mutex> lk(mutex_);
        if (count_down(lk))
        {
            return;
        }
        count_.cond_.wait(lk, detail::counter_is_zero(count_));
    }

In both cases the unique_lock does the job, or am I missing something?

Also I would add a new method to boost::detail::counter:

    bool dec_and_notify_all_if_value(const std::size_t a_value)
    {
        if (--value_ == a_value)
        {
            cond_.notify_all();
            return true;
        }
        return false;
    }

This way you can further simplify the count_down(unique_lock<mutex> &lk) method:

    // lk is here only to assure it's a thread-safe call
    bool count_down(unique_lock<mutex> &lk)
    /// pre-condition (count_.value_ > 0)
    {
        BOOST_ASSERT(count_.value_ > 0);
        return count_.dec_and_notify_all_if_value(0);
    }

Regards
Gaetano Mendola
On 11/05/13 23:25, Gaetano Mendola wrote:
In both cases the unique_lock does the job, or am I missing something?

Not really. It is an optimization. The mutex doesn't need to be locked when the condition is notified.
When would this function be used?

Best,
Vicente
On 12/05/2013 01.03, Vicente J. Botet Escriba wrote:
Not really. It is an optimization.
Let me understand. Instead of:

- long jump
- unlock inside boost::~unique_lock

with your optimization the following will happen:

- explicit unlock
- long jump
- boost::~unique_lock

Is that optimization really worth it? I mean, a mutex unlock is hundreds of instructions, possibly even involving a syscall, while a long jump is just a long jump. Do you have any evidence of the gain from doing so? Also, while we are talking about optimization, why not use lock_guard where possible?
The mutex doesn't need to be locked when the condition is notified.
Of course it is not needed, but my point was a different one.
Also I would add a new method to boost::detail::counter
bool dec_and_notify_all_if_value(const std::size_t a_value)
{
    if (--value_ == a_value)
    {
        cond_.notify_all();
        return true;
    }
    return false;
}
This way you can further simplify the count_down(unique_lock<mutex> &lk) method:
// lk is here only to ensure it's a thread-safe call
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    return count_.dec_and_notify_all_if_value(0);
}
When would this function be used?
Look at the new version of bool count_down(unique_lock<mutex>& lk); that I presented above. Regards, Gaetano Mendola
On 13/05/13 21:05, Gaetano Mendola wrote:
On 12/05/2013 01.03, Vicente J. Botet Escriba wrote:
On 11/05/13 23:25, Gaetano Mendola wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
Is the explicit lk.unlock() inside the
bool count_down(unique_lock<mutex> &lk);
needed?
I see the two places where it's called are:
void count_down()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    count_down(lk);
}
and
void count_down_and_wait()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    if (count_down(lk))
    {
        return;
    }
    count_.cond_.wait(lk, detail::counter_is_zero(count_));
}
in both cases the unique_lock does the job, or am I missing something? Not really. It is an optimization.
Let me understand. Instead of:
- long jump
- unlock inside boost::~unique_lock
with your optimization the following will happen:
- explicit unlock
- long jump
- boost::~unique_lock
Is that optimization really worth it? I mean, a mutex unlock is hundreds of instructions, possibly even involving a syscall, while a long jump is just a long jump.
Do you have any evidence of the gain you get by doing so?
Also, while we talk about optimization, why not use lock_guard when possible?
The mutex doesn't need to be locked when the condition is notified.
Of course it is not needed, but my point was a different one.
I didn't express myself clearly. The optimization consists of unlocking as soon as possible, not of executing fewer instructions.
Also I would add a new method to boost::detail::counter
bool dec_and_notify_all_if_value(const std::size_t a_value)
{
    if (--value_ == a_value)
    {
        cond_.notify_all();
        return true;
    }
    return false;
}
This way you can further simplify the count_down(unique_lock<mutex> &lk) method:
// lk is here only to ensure it's a thread-safe call
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    return count_.dec_and_notify_all_if_value(0);
}
When would this function be used?
Look at the new version of bool count_down(unique_lock<mutex>& lk); that I presented above.
I meant in addition to the use in count_down. Best, Vicente
On 13/05/2013 23.13, Vicente J. Botet Escriba wrote:
On 13/05/13 21:05, Gaetano Mendola wrote:
On 12/05/2013 01.03, Vicente J. Botet Escriba wrote:
On 11/05/13 23:25, Gaetano Mendola wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
Is the explicit lk.unlock() inside the
bool count_down(unique_lock<mutex> &lk);
needed?
I see the two places where it's called are:
void count_down()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    count_down(lk);
}
and
void count_down_and_wait()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    if (count_down(lk))
    {
        return;
    }
    count_.cond_.wait(lk, detail::counter_is_zero(count_));
}
in both cases the unique_lock does the job, or am I missing something? Not really. It is an optimization.
Let me understand. Instead of:
- long jump
- unlock inside boost::~unique_lock
with your optimization the following will happen:
- explicit unlock
- long jump
- boost::~unique_lock
Is that optimization really worth it? I mean, a mutex unlock is hundreds of instructions, possibly even involving a syscall, while a long jump is just a long jump.
Do you have any evidence of the gain you get by doing so?
Also, while we talk about optimization, why not use lock_guard when possible?
The mutex doesn't need to be locked when the condition is notified.
Of course it is not needed, but my point was a different one.
I didn't express myself clearly. The optimization consists of unlocking as soon as possible, not of executing fewer instructions.
Unlocking as soon as possible is something that should happen whenever possible, but basically you are doing this kind of optimization: instead of

void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ....
    return;
}

you are optimizing by doing this:

void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ...
    lk.unlock();
    return;
}

and I'm asking: is it worth it? Do you have any evidence of it? Having said that, do whatever you believe is better in terms of clarity and correctness. I have also seen that in a couple of places the unique_lock can be replaced by a lock_guard (and possibly declared const).
I meant in addition to the use in count_down.
At the moment, only there. Regards, Gaetano Mendola
On 14/05/13 00:44, Gaetano Mendola wrote:
Unlocking as soon as possible is something that should happen whenever possible, but basically you are doing this kind of optimization:
instead of
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ....
    return;
}
you are optimizing by doing this:
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ...
    lk.unlock();
    return;
}
and I'm asking: is it worth it? Do you have any evidence of it? Having said that, do whatever you believe is better in terms of clarity and correctness. I have also seen that in a couple of places the unique_lock can be replaced by a lock_guard (and possibly declared const).
Sorry Gaetano, I have an uncommitted version where

bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}

Maybe I'm wrong and I'm doing premature optimization, and it is better to use lock_guard and not unlock before notifying. As you point out, it is in any case clearer. Measures would be needed :( Thanks for your interest, Vicente
On 14/05/2013 07.46, Vicente J. Botet Escriba wrote:
On 14/05/13 00:44, Gaetano Mendola wrote:
Unlocking as soon as possible is something that should happen whenever possible, but basically you are doing this kind of optimization:
instead of
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ....
    return;
}
you are optimizing by doing this:
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ...
    lk.unlock();
    return;
}
and I'm asking: is it worth it? Do you have any evidence of it? Having said that, do whatever you believe is better in terms of clarity and correctness. I have also seen that in a couple of places the unique_lock can be replaced by a lock_guard (and possibly declared const).
Sorry Gaetano,
I have an uncommitted version where
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}
Maybe I'm wrong and I'm doing premature optimization, and it is better to use lock_guard and not unlock before notifying. As you point out, it is in any case clearer.
Now it makes sense. Unlocking a unique_lock prematurely, without the use of an extra scope, is not natural; it is at least a weird pattern. I believe the source of the "problem" is in the implementation of boost::detail::counter: as you can see, it holds both a counter and a condition variable, and its users are obliged to protect both (even when not needed); look at inc_and_notify_all for example.
Measures would be needed :(
It is not worth measuring; I asked because I saw an unusual "optimization" and it was interesting to know whether you did that due to your past experience with it. Regards, Gaetano Mendola
On 14/05/13 08:16, Gaetano Mendola wrote:
On 14/05/2013 07.46, Vicente J. Botet Escriba wrote:
On 14/05/13 00:44, Gaetano Mendola wrote:
Unlocking as soon as possible is something that should happen whenever possible, but basically you are doing this kind of optimization:
instead of
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ....
    return;
}
you are optimizing by doing this:
void foo()
{
    boost::unique_lock<boost::mutex> lk(mutex_);
    ...
    lk.unlock();
    return;
}
and I'm asking: is it worth it? Do you have any evidence of it? Having said that, do whatever you believe is better in terms of clarity and correctness. I have also seen that in a couple of places the unique_lock can be replaced by a lock_guard (and possibly declared const).
Sorry Gaetano,
I have an uncommitted version where
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}
Maybe I'm wrong and I'm doing premature optimization, and it is better to use lock_guard and not unlock before notifying. As you point out, it is in any case clearer.
Now it makes sense. Unlocking a unique_lock prematurely, without the use of an extra scope, is not natural; it is at least a weird pattern. I believe the source of the "problem" is in the implementation of boost::detail::counter: as you can see, it holds both a counter and a condition variable, and its users are obliged to protect both (even when not needed); look at inc_and_notify_all for example.
Which boost::detail::counter are you referring to? Best, Vicente
On 17/05/13 22:15, Vicente J. Botet Escriba wrote:
On 14/05/13 08:16, Gaetano Mendola wrote:
On 14/05/2013 07.46, Vicente J. Botet Escriba wrote:
Now it makes sense, unlocking prematurely an unique lock without the use of an extra scope is something not natural, it is at least a weird pattern, I believe the source of the "problem" is in the implementation of that boost::detail::counter indeed as you can see it has a counter and a condition and users of it are obliged to protect both counter and condition (even if not needed) look at inc_and_notify_all for example.
To which boost::detail::counter are you referring to?
Forget this question :( Vicente
On 5/14/2013 12:46 AM, Vicente J. Botet Escriba wrote:
I have an uncommitted version where
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}
Maybe I'm wrong and I'm doing premature optimization, and it is better to use lock_guard and not unlock before notifying. As you point out, it is in any case clearer.
Measures would be needed :(
Thanks for your interest,
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        // ---> interleave here <---
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}

What can happen with the explicit unlock? If waiting threads wake spuriously and interleave where marked above, I suppose they just acquire the lock, see that the counter is zero, and return. Since such a thread is no longer waiting on the cv, it is no longer notified by notify_all. The count_down thread has to execute slightly more code (the difference between unique_lock and lock_guard).

What can happen without the explicit unlock? We'll notify the waiting threads while we still have the mutex locked. I think worst case they wake immediately, fail to acquire the lock, and immediately go back to sleep. I've been led to believe that this sleep shouldn't really happen; rather, the threads should spin for the short time until the lock is released in the unique_lock destructor. Additionally, there seems to be no guarantee the waiting threads will even wake before the unique_lock destructor runs.

The first seems better, but I am not an expert. Looking around I've found a couple of other C++ countdown latch implementations; none of them seem to do this, which may or may not mean anything.
On 05/14/2013 08:34 AM, Michael Marcin wrote:
On 5/14/2013 12:46 AM, Vicente J. Botet Escriba wrote:
I have an uncommitted version where
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}
Maybe I'm wrong and I'm doing premature optimization, and it is better to use lock_guard and not unlock before notifying. As you point out, it is in any case clearer.
Measures would be needed :(
Thanks for your interest,
bool count_down(unique_lock<mutex>& lk)
/// pre_condition (count_.value_ > 0)
{
    BOOST_ASSERT(count_.value_ > 0);
    if (--count_.value_ == 0)
    {
        lk.unlock();
        // ---> interleave here <---
        count_.cond_.notify_all(); // unlocked !!!
        return true;
    }
    return false;
}
What can happen with the explicit unlock? If waiting threads wake spuriously and interleave where marked above, I suppose they just acquire the lock, see that the counter is zero, and return. Since such a thread is no longer waiting on the cv, it is no longer notified by notify_all. The count_down thread has to execute slightly more code (the difference between unique_lock and lock_guard).
What can happen without the explicit unlock? We'll notify the waiting threads while we still have the mutex locked. I think worst case they wake immediately, fail to acquire the lock, and immediately go back to sleep. I've been led to believe that this sleep shouldn't really happen; rather, the threads should spin for the short time until the lock is released in the unique_lock destructor. Additionally, there seems to be no guarantee the waiting threads will even wake before the unique_lock destructor runs.
The first seems better, but I am not an expert. Looking around I've found a couple of other C++ countdown latch implementations; none of them seem to do this, which may or may not mean anything.
Your analysis is correct, and as you said the first version is preferable. I still believe those doubts arise because boost::detail::counter, which is not thread safe, carries a condition variable that doesn't need to be protected. Hence in cases like this you have the choice between "unlock then notify" and "notify then unlock". The rule of thumb is to minimize the lock/unlock window, and since the condition variable doesn't need to be protected while the counter does, unlocking before the condition notify is preferable. Regards, Gaetano Mendola
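The pattern the thread converges on (decrement under the lock, unlock, then notify_all) can be sketched as a minimal standalone latch. This is illustrative code, not Boost's actual implementation; the class name countdown_latch and the helper run_demo are invented here:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>

// Minimal sketch of the "unlock then notify" pattern discussed above.
// Not Boost's actual code: class and helper names are invented here.
class countdown_latch {
public:
    explicit countdown_latch(std::size_t count) : count_(count) {}

    // Returns true if this call released the waiters.
    bool count_down() {
        std::unique_lock<std::mutex> lk(mutex_);
        assert(count_ > 0); // same precondition as in the prototype above
        if (--count_ == 0) {
            lk.unlock();        // release the mutex first...
            cond_.notify_all(); // ...then notify: waiters can acquire
            return true;        // the lock without blocking on it
        }
        return false;
    }

    void wait() {
        std::unique_lock<std::mutex> lk(mutex_);
        cond_.wait(lk, [this] { return count_ == 0; });
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::size_t count_;
};

// Exercise the latch: one waiter, two decrements from the calling thread.
bool run_demo() {
    countdown_latch latch(2);
    std::thread waiter([&] { latch.wait(); });
    bool first = latch.count_down(); // counter 2 -> 1, nobody released
    bool last = latch.count_down();  // counter 1 -> 0, waiter released
    waiter.join();
    return !first && last;
}
```

A spurious wakeup between the unlock and the notify is harmless here: the woken thread re-checks the predicate under the lock, sees the counter at zero, and returns, exactly as the analysis above describes.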
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
Java's implementation of CountDownLatch permits count_down to be called multiple times even if the counter has reached 0; in that case nothing happens. In your implementation count > 0 is a precondition. I would implement it as described by Java. Regards, Gaetano Mendola
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
As reported in another reply, I believe that as it is it misses some features. Imagine indeed the following two scenarios: Thread T1 needs to wait until another thread T2 has executed a certain operation at least N times. In this case (with the precondition of count_down that counter > 0) T2 has to track how many count_down calls it has executed (knowing the initial value, since there is no way to ask a latch for its current value) or issue a "try_wait" before each count_down, and this IMHO seems a bit awkward. Rubbing salt into the wound, imagine if T1 has to wait until T2 and T3 have done a certain operation N times (in total). In this case T2 and T3 have to sync at the start and communicate to each other how many count_down calls they have executed; the trick of issuing a "try_wait" doesn't work here. In my opinion the solution for both use cases is to remove the precondition. Regards, Gaetano Mendola
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
On 15/05/13 19:26, Gaetano Mendola wrote:
As reported in another reply, I believe that as it is it misses some features.
Imagine indeed the following two scenarios:
Thread T1 needs to wait until another thread T2 has executed a certain operation at least N times. In this case (with the precondition of count_down that counter > 0) T2 has to track how many count_down calls it has executed (knowing the initial value, since there is no way to ask a latch for its current value) or issue a "try_wait" before each count_down, and this IMHO seems a bit awkward.
You can build on top of latch a class that holds a latch and a specific counter for T2 (the latch is initialized to 1 and the counter to N). The thread T1 just waits on the latch, and the thread T2 counts down the thread-specific counter (without any need to use locks). Once this specific counter reaches zero, the class counts down the latch.
Rubbing salt into the wound, imagine if T1 has to wait until T2 and T3 have done a certain operation N times (in total). In this case T2 and T3 have to sync at the start and communicate to each other how many count_down calls they have executed; the trick of issuing a "try_wait" doesn't work in this case.
What is the expected behavior of the threads T2 and T3 if they count_down more than N times? IMHO, you should build your own class that takes care of your specific constraints.
In my opinion the solution of both use cases is to remove the precondition.
Could you show how removing the precondition can help to implement the use cases you have presented? Best, Vicente
On 16/05/2013 21.27, Vicente J. Botet Escriba wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
On 15/05/13 19:26, Gaetano Mendola wrote:
As reported in another reply, I believe that as it is it misses some features.
Imagine indeed the following two scenarios:
Thread T1 needs to wait until another thread T2 has executed a certain operation at least N times. In this case (with the precondition of count_down that counter > 0) T2 has to track how many count_down calls it has executed (knowing the initial value, since there is no way to ask a latch for its current value) or issue a "try_wait" before each count_down, and this IMHO seems a bit awkward.
You can build on top of latch a class that holds a latch and a specific counter for T2 (the latch is initialized to 1 and the counter to N). The thread T1 just waits on the latch, and the thread T2 counts down the thread-specific counter (without any need to use locks). Once this specific counter reaches zero, the class counts down the latch.
Rubbing salt into the wound, imagine if T1 has to wait until T2 and T3 have done a certain operation N times (in total). In this case T2 and T3 have to sync at the start and communicate to each other how many count_down calls they have executed; the trick of issuing a "try_wait" doesn't work in this case.
What is the expected behavior of the threads T2 and T3 if they count_down more than N times? IMHO, you should build your own class that takes care of your specific constraints.
The expected behaviour is that if you do a count_down on a latch that is already at zero, the operation has no effect.
In my opinion the solution of both use cases is to remove the precondition.
Could you show how removing the precondition can help to implement the use cases you have presented?
Simply, T2 and T3 can continue to call count_down without keeping track of whether the counter has reached zero or not. Regards, Gaetano Mendola
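The behaviour Gaetano asks for (count_down saturating at zero instead of asserting, as in Java's CountDownLatch) is a small change to a basic latch. The class name relaxed_latch and the value() accessor are illustrative, not part of the Boost prototype:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Sketch of a latch without the count > 0 precondition: decrements past
// zero are silent no-ops, as in Java's CountDownLatch. Illustrative only.
class relaxed_latch {
public:
    explicit relaxed_latch(std::size_t count) : count_(count) {}

    // Producers may call this freely without tracking how many
    // decrements they have already issued.
    void count_down() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (count_ == 0)
            return; // saturate at zero instead of asserting
        if (--count_ == 0) {
            lk.unlock(); // unlock-then-notify, as discussed above
            cond_.notify_all();
        }
    }

    void wait() {
        std::unique_lock<std::mutex> lk(mutex_);
        cond_.wait(lk, [this] { return count_ == 0; });
    }

    // Illustrative accessor so the behaviour can be observed in a test.
    std::size_t value() {
        std::lock_guard<std::mutex> lk(mutex_);
        return count_;
    }

private:
    std::mutex mutex_;
    std::condition_variable cond_;
    std::size_t count_;
};
```

With this variant, T2 and T3 in the scenario above can simply keep calling count_down(); once the total reaches N, T1 is released and any further decrements do nothing.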
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
What's the plan for this countdown_latch? Is it a candidate for inclusion in version 1.54? Regards, Gaetano Mendola
On 20/05/13 22:09, Gaetano Mendola wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
What's the plan for this countdown_latch? Is it a candidate for inclusion in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to some changes. E.g. I have not yet decided how to provide cyclic latches. Best, Vicente
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
On 20/05/13 22:09, Gaetano Mendola wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
What's the plan for this countdown_latch? Is it a candidate for inclusion in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to some changes. E.g. I have not yet decided how to provide cyclic latches.
Given that I have my own implementation of a latch, can you please explain what a cyclic latch is, to see whether my latch already provides the feature? I will propose my implementation of the latch. Regards, Gaetano Mendola
On 25/05/13 14:45, Gaetano Mendola wrote:
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
On 20/05/13 22:09, Gaetano Mendola wrote:
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
On 22/04/13 10:16, Michael Marcin wrote:
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could also be used as a barrier; I don't know if the name is the right one in that case. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
What's the plan for this countdown_latch? Is it a candidate for inclusion in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to some changes. E.g. I have not yet decided how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle. I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch. Best, Vicente
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
Le 20/05/13 22:09, Gaetano Mendola a écrit :
On 27/04/2013 09.05, Vicente J. Botet Escriba wrote:
Le 22/04/13 10:16, Michael Marcin a écrit :
On 4/21/2013 11:30 PM, Vicente J. Botet Escriba wrote:
Hi,
Back on topic of latches and barriers. I was reading up on Java's CyclicBarrier and found its await return value pretty interesting and probably fairly free to implement.
"...each invocation of await() returns the arrival index of that thread at the barrier."
"the arrival index of the current thread, where index getParties() - 1 indicates the first to arrive and zero indicates the last to arrive."
To do this in boost::thread::barrier you would just have to cache m_count right after the decrement and return it instead of the current true/false.
Yes, boost::barrier returns true for the last wait. The change would be quite simple, but what would be the utility of such an index?
I have implemented a first prototype of a latch class that could be used also as a barrier. I don't know if the name is the right one then. I have also implemented a completion_latch class that is able to call a function when the counter reaches zero. I'm not satisfied with the current implementation as it needs a lot of synchronization.
The change set is https://svn.boost.org/trac/boost/changeset/84055. To see the sources, you would need to update the trunk or to take a look at https://svn.boost.org/svn/boost/trunk/boost/thread/
Please let me know what you think.
What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change. E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0. I would make two different names: latch and cyclic_latch. I'm also for removing the precondition counter > 0 on latch::count_down. Regards Gaetano Mendola
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
Le 20/05/13 22:09, Gaetano Mendola a écrit :
What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change. E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind. Best, Vicente
On 5/26/2013 2:53 AM, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
Le 20/05/13 22:09, Gaetano Mendola a écrit :
What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change. E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
What about adding try_count_down like I suggested? http://article.gmane.org/gmane.comp.lib.boost.devel/241554
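One possible shape for the try_count_down Michael suggests — a sketch, not the Boost.Thread trunk code: the name, return convention, and the rest of the latch interface are assumptions made for illustration. The idea is that a caller who may arrive after the counter has already reached zero gets a false return instead of violating the precondition.

```cpp
#include <condition_variable>
#include <mutex>

// Minimal countdown latch sketch with a hypothetical try_count_down.
class countdown_latch {
public:
    explicit countdown_latch(unsigned count) : count_(count) {}

    // Blocks until the counter reaches zero.
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ == 0; });
    }

    // Precondition: counter > 0 (the contract discussed in the thread).
    void count_down() {
        std::unique_lock<std::mutex> lk(m_);
        if (--count_ == 0) {
            lk.unlock();
            cv_.notify_all();
        }
    }

    // Decrements only if the counter is still positive; returns false
    // instead of violating the precondition when it is already zero.
    bool try_count_down() {
        std::unique_lock<std::mutex> lk(m_);
        if (count_ == 0)
            return false;
        if (--count_ == 0) {
            lk.unlock();
            cv_.notify_all();
        }
        return true;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    unsigned count_;
};
```

With this shape a producer can simply call try_count_down on every item and ignore the result once the latch has opened.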
Le 26/05/13 10:01, Michael Marcin a écrit :
On 5/26/2013 2:53 AM, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
What about adding try_count_down like I suggested?
Yes, this will be the right direction if there is a valid use case. Best, Vicente
On 26/05/2013 09.53, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
Le 20/05/13 22:09, Gaetano Mendola a écrit :
What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change. E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
Do you really need a real example?
I don't have a real example but again, having two threads calling latch::count_down on the same latch instance is going to be a headache for the user; imagine a multi-producer/single-consumer application where the consumer has to start when at least N items are ready.
Yes, sure, they can implement their own Latch taking care of not calling count_down on the internal latch; for that matter, seeing how hard it is to implement a latch, they can even implement one from scratch.
At least throw an exception on an already-zeroed latch instead of letting the counter assume the value 2^64 − 1 (making the wait a blocking call again).
On the other side, can you please tell me why you prefer to keep the precondition, breaking the principle of least surprise? Consider that CountDownLatch is already a concept in Java and count_down there is implemented without the precondition. Regards Gaetano Mendola
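Gaetano's 2^64 − 1 remark is plain unsigned wraparound: with no precondition check, decrementing an already-zero counter silently wraps to the maximum value, so any subsequent wait() would block (practically) forever. A tiny illustration — decrement_unchecked is a hypothetical helper, not code from the thread:

```cpp
#include <cstdint>

// With an unsigned counter and no check, 0 - 1 wraps to 2^64 - 1
// (18446744073709551615), re-arming the latch instead of failing.
std::uint64_t decrement_unchecked(std::uint64_t count) {
    return count - 1;   // well-defined modular arithmetic for unsigned types
}
```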
Le 26/05/13 15:07, Gaetano Mendola a écrit :
On 26/05/2013 09.53, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
Le 20/05/13 22:09, Gaetano Mendola a écrit :
> What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change. E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
Do you really need a real example?
Yes. As having a real example we could see if the solution you propose would be used by the user, or if the user would implement their own class to solve the problem more efficiently.
I don't have a real example but again, having two threads calling latch::count_down on the same latch instance is going to be a headache for the user; imagine a multi-producer/single-consumer application where the consumer has to start when at least N items are ready.
I will not use a latch for that. I would build an "at least n messages on the queue" data type and use the usual wait on this queue.
Yes, sure, they can implement their own Latch taking care of not calling count_down on the internal latch; for that matter, seeing how hard it is to implement a latch, they can even implement one from scratch.
At least throw an exception on an already-zeroed latch instead of letting the counter assume the value 2^64 − 1 (making the wait a blocking call again).
Throwing an exception would need the same kind of check, which I want to avoid. The single option I think is acceptable is try_count_down, but before adding it I want to have a valid use case and check how try_count_down would make the application code better.
On the other side, can you please tell me why you prefer to keep the precondition, breaking the principle of least surprise? Consider that CountDownLatch is already a concept in Java and count_down there is implemented without the precondition.
Java is not the first language in which these kinds of mechanisms are used. I used them a long time ago in C. C++ is not Java. C++ interfaces tend to use Requires clauses that must be ensured by the user so that the implementation can be as efficient as possible. I have used these terms as they are part of a standard C++ proposal, and the proposal has, as I would expect, this Requires clause. BTW, I'm open to, and considering, using wait and notify, which are more in line with the current C++ interfaces. Best, Vicente
On 26/05/2013 17.30, Vicente J. Botet Escriba wrote:
Le 26/05/13 15:07, Gaetano Mendola a écrit :
On 26/05/2013 09.53, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
> Le 20/05/13 22:09, Gaetano Mendola a écrit :
>> What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
> No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change.
> E.g. I have not decided yet how to provide cyclic latches.
Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
Do you really need a real example?
Yes. As having a real example we could see if the solution you propose would be used by the user, or if the user would implement their own class to solve the problem more efficiently.
I don't have a real example but again, having two threads calling latch::count_down on the same latch instance is going to be a headache for the user; imagine a multi-producer/single-consumer application where the consumer has to start when at least N items are ready.
I will not use a latch for that. I would build an "at least n messages on the queue" data type and use the usual wait on this queue.
If the buffer is a general purpose buffer the wait on it will return as soon as someone puts stuff on it. As you know, there are multiple ways to achieve the same even without the usage of a latch.
I'm fine with leaving the precondition; after all, people have to read the manual. I have also seen you have added a BOOST_ASSERT(count_ > 0).
About the cyclic_latch: instead of the bool to drive the counter reset I would template the latch on two different policies, for reset or not, and then expose a typedef for each, something like this:

template <class COUNTER_POLICY>
class Latch {
    bool count_down(unique_lock<mutex> &lk) /// pre_condition (count_ > 0)
    {
        BOOST_ASSERT(count_ > 0);
        if (--count_ == 0) {
            counter_policy_.reset(count_);
            ++generation_;
            lk.unlock();
            cond_.notify_all();
            return true;
        }
        return false;
    }
private:
    COUNTER_POLICY counter_policy_;
};

typedef Latch<NoResetPolicy> latch;
typedef Latch<ResetPolicy> cyclic_latch;

Regards Gaetano Mendola
Le 26/05/13 19:20, Gaetano Mendola a écrit :
On 26/05/2013 17.30, Vicente J. Botet Escriba wrote:
Le 26/05/13 15:07, Gaetano Mendola a écrit :
On 26/05/2013 09.53, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
Le 25/05/13 14:45, Gaetano Mendola a écrit :
> On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
>> Le 20/05/13 22:09, Gaetano Mendola a écrit :
>>> What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
>> No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change.
>> E.g. I have not decided yet how to provide cyclic latches.
> Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
Do you really need a real example?
Yes. As having a real example we could see if the solution you propose would be used by the user, or if the user would implement their own class to solve the problem more efficiently.
I don't have a real example but again, having two threads calling latch::count_down on the same latch instance is going to be a headache for the user; imagine a multi-producer/single-consumer application where the consumer has to start when at least N items are ready.
I will not use a latch for that. I would build an "at least n messages on the queue" data type and use the usual wait on this queue.
If the buffer is a general purpose buffer the wait on it will return as soon as someone puts stuff on it. As you know, there are multiple ways to achieve the same even without the usage of a latch.
I'm fine with leaving the precondition; after all, people have to read the manual. I have also seen you have added a BOOST_ASSERT(count_ > 0). Great.
About the cyclic_latch: instead of the bool to drive the counter reset I would template the latch on two different policies, for reset or not, and then expose a typedef for each, something like this:

template <class COUNTER_POLICY>
class Latch {
    bool count_down(unique_lock<mutex> &lk) /// pre_condition (count_ > 0)
    {
        BOOST_ASSERT(count_ > 0);
        if (--count_ == 0) {
            counter_policy_.reset(count_);
            ++generation_;
            lk.unlock();
            cond_.notify_all();
            return true;
        }
        return false;
    }
private:
    COUNTER_POLICY counter_policy_;
};

typedef Latch<NoResetPolicy> latch;
typedef Latch<ResetPolicy> cyclic_latch;
Would the base Latch class be public? If not, this is an implementation detail and is equivalent to having two classes sharing a common implementation. Shouldn't the base latch make use of EBO (the empty base optimization)? Best, Vicente
On 26/05/2013 22.01, Vicente J. Botet Escriba wrote:
Le 26/05/13 19:20, Gaetano Mendola a écrit :
On 26/05/2013 17.30, Vicente J. Botet Escriba wrote:
Le 26/05/13 15:07, Gaetano Mendola a écrit :
On 26/05/2013 09.53, Vicente J. Botet Escriba wrote:
Le 25/05/13 15:51, Gaetano Mendola a écrit :
On 25/05/2013 15.26, Vicente J. Botet Escriba wrote:
> Le 25/05/13 14:45, Gaetano Mendola a écrit :
>> On 21/05/2013 07.13, Vicente J. Botet Escriba wrote:
>>> Le 20/05/13 22:09, Gaetano Mendola a écrit :
>>>> What's the plan for this countdown_latch? Is it a candidate to be included in version 1.54?
>>> No, documentation and tests are missing. I hope it will be ready for 1.55. This and other additions will be experimental and their interfaces will be subject to change.
>>> E.g. I have not decided yet how to provide cyclic latches.
>> Given the fact I have my own implementation of a latch, can you please explain what a cyclic latch is, so I can see if my latch already provides the feature? I will propose my implementation of latch.
> boost::barrier resets the counter once all the threads synchronize, so that the barrier can be reused. In order to do that the barrier stores the initial counter value and reuses it at each cycle.
> I don't know if all the latches must support this feature (possibly via a configuration parameter) or if it is better to have a separate cyclic_latch.
So what you are proposing is a latch that, as soon as it has reached 0, resets to the initial value, permitting a wait to continue without stopping if it arrives after the reset, or to block waiting for a reset. My latch doesn't provide such a feature, but isn't it enough to keep a counter (reset_counter), increased at each reset and decreased at each wait() (before exiting from it)? This way the wait shall be a blocking call if the "reset counter" is > 0.
I would make two different names: latch and cyclic_latch. Thanks for your advice. The alternative is to have a bool parameter on the constructor and make use of a "restore counter" when the "reset counter" is > 0, which is not too expensive. This needs however to store another integer for the "restore counter". I want to be sure that this doesn't add any additional constraint to the class.
I'm also for removing the precondition counter > 0 on latch::count_down.
I don't see yet the case of a thread using count_down several times on the same latch as a valid use case. But maybe if you post a concrete (real) example, it could make me change my mind.
Do you really need a real example?
Yes. As having a real example we could see if the solution you propose would be used by the user, or if the user would implement their own class to solve the problem more efficiently.
I don't have a real example but again, having two threads calling latch::count_down on the same latch instance is going to be a headache for the user; imagine a multi-producer/single-consumer application where the consumer has to start when at least N items are ready.
I will not use a latch for that. I would build an "at least n messages on the queue" data type and use the usual wait on this queue.
If the buffer is a general purpose buffer the wait on it will return as soon as someone puts stuff on it. As you know, there are multiple ways to achieve the same even without the usage of a latch.
I'm fine with leaving the precondition; after all, people have to read the manual. I have also seen you have added a BOOST_ASSERT(count_ > 0). Great.
About the cyclic_latch: instead of the bool to drive the counter reset I would template the latch on two different policies, for reset or not, and then expose a typedef for each, something like this:

template <class COUNTER_POLICY>
class Latch {
    bool count_down(unique_lock<mutex> &lk) /// pre_condition (count_ > 0)
    {
        BOOST_ASSERT(count_ > 0);
        if (--count_ == 0) {
            counter_policy_.reset(count_);
            ++generation_;
            lk.unlock();
            cond_.notify_all();
            return true;
        }
        return false;
    }
private:
    COUNTER_POLICY counter_policy_;
};

typedef Latch<NoResetPolicy> latch;
typedef Latch<ResetPolicy> cyclic_latch;
Would the base Latch class be public? If not, this is an implementation detail and is equivalent to having two classes sharing a common implementation.
Well, not equivalent, because with, say, a boolean parameter you need to check at runtime what to do (reset the counter or not), whereas with a policy the choice is made at compile time; also, implementing it with a boolean will force you to store the original counter value anyway, while NoResetPolicy on the other side will be an empty class.
Shouldn't the base latch make use of EBO?
Interesting indeed; this should even save that one byte in the case of NoResetPolicy. Regards Gaetano Mendola
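A sketch of the policy-based design with EBO applied, using the policy names from Gaetano's example; the on_zero hook, the constructors, and the count() accessor are assumptions made for illustration, and the waiting machinery (condition variable, generation counter) is omitted for brevity. Inheriting privately from the policy lets the empty NoResetPolicy occupy no storage.

```cpp
#include <mutex>

struct NoResetPolicy {                       // empty: costs nothing via EBO
    explicit NoResetPolicy(unsigned) {}
    void on_zero(unsigned&) {}               // leave the counter at zero
};

class ResetPolicy {                          // stores the initial value
public:
    explicit ResetPolicy(unsigned initial) : initial_(initial) {}
    void on_zero(unsigned& count) { count = initial_; }  // restore counter
private:
    unsigned initial_;
};

template <class CounterPolicy>
class Latch : private CounterPolicy {        // private base enables EBO
public:
    explicit Latch(unsigned count) : CounterPolicy(count), count_(count) {}

    // Returns true when this call brings the counter to zero.
    bool count_down() {
        std::lock_guard<std::mutex> lk(m_);
        if (--count_ == 0) {
            CounterPolicy::on_zero(count_);  // cyclic variant re-arms here
            return true;
        }
        return false;
    }

    unsigned count() const {
        std::lock_guard<std::mutex> lk(m_);
        return count_;
    }

private:
    mutable std::mutex m_;
    unsigned count_;
};

typedef Latch<NoResetPolicy> latch;
typedef Latch<ResetPolicy>   cyclic_latch;
```

Whether EBO actually shrinks the object depends on padding, but the empty policy is guaranteed not to add a subobject of nonzero size when used as a base.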
Le 26/05/13 22:45, Gaetano Mendola a écrit :
On 26/05/2013 22.01, Vicente J. Botet Escriba wrote:
Would the base Latch class be public? If not, this is an implementation detail and is equivalent to having two classes sharing a common implementation.
Well, not equivalent, because with, say, a boolean parameter you need to check at runtime what to do (reset the counter or not), whereas with a policy the choice is made at compile time; also, implementing it with a boolean will force you to store the original counter value anyway, while NoResetPolicy on the other side will be an empty class. I said "equivalent to having two classes". Best, Vicente
On 05/26/2013 11:53 PM, Vicente J. Botet Escriba wrote:
Le 26/05/13 22:45, Gaetano Mendola a écrit :
On 26/05/2013 22.01, Vicente J. Botet Escriba wrote:
Would the base Latch class be public? If not, this is an implementation detail and is equivalent to having two classes sharing a common implementation.
Well, not equivalent, because with, say, a boolean parameter you need to check at runtime what to do (reset the counter or not), whereas with a policy the choice is made at compile time; also, implementing it with a boolean will force you to store the original counter value anyway, while NoResetPolicy on the other side will be an empty class. I said "equivalent to having two classes".
And it's even an advantage, due to the fact that they share the implementation. Regards Gaetano Mendola
Le 27/05/13 11:21, Gaetano Mendola a écrit :
On 05/26/2013 11:53 PM, Vicente J. Botet Escriba wrote:
Le 26/05/13 22:45, Gaetano Mendola a écrit :
On 26/05/2013 22.01, Vicente J. Botet Escriba wrote:
Would the base Latch class be public? If not, this is an implementation detail and is equivalent to having two classes sharing a common implementation.
Well, not equivalent, because with, say, a boolean parameter you need to check at runtime what to do (reset the counter or not), whereas with a policy the choice is made at compile time; also, implementing it with a boolean will force you to store the original counter value anyway, while NoResetPolicy on the other side will be an empty class.
I said "equivalent to having two classes".
And it's even an advantage, due to the fact that they share the implementation.
I should have been more precise: "equivalent for the user". Vicente
On 22/04/2013 06.30, Vicente J. Botet Escriba wrote:
I don't know your constraints: Why do you need to run the setup on the new created thread? Could it be executed directly on the widget constructor?
I have the same pattern in my framework and I solved it with a simple barrier (however I incur an extra synchronization that is not needed: indeed the created thread body has to wait as well until the thread creator reaches the barrier). In my very case the setup *has* to be run by the thread because the setup does: 1) CPU affinity 2) thread priority change. Regards Gaetano Mendola
On Apr 21, 2013, at 4:27 AM, Michael Marcin
I recently ran across the need to spawn a thread and wait for it to finish its setup before continuing.
The accepted answer seems to be using a mutex and condition variable to achieve this.
That works, and is the appropriate mechanism using the available tools in Boost.Thread.
However that work clutters up the code quite a bit with the implementation details.
Agreed
I came across Java's CountDownLatch which does basically the same work but bundles it up into a tidy package.
It seems to be a fairly trivial but useful abstraction.
implementation: http://codepad.org/E8kd2Eb8
usage:
class widget {
public:
    widget()
        : latch( 1 )
        , thread_( [&]{ thread_func(); } )
    {
        latch.wait();
    }

private:
    void setup();
    void run();

    void thread_func()
    {
        setup();
        latch.count_down();
        run();
    }

    countdown_latch latch;
    std::thread thread_;
};
That sounds like a barrier: http://pubs.opengroup.org/onlinepubs/009695299/functions/pthread_barrier_wai... I'd prefer to create a barrier class and, in your example, it would release waiting threads when two are blocked behind it. IOW, you'd create a barrier for two threads and both thread_proc() and the constructor would wait() on the barrier. Once both threads have called wait(), they are both released. (I plan to present that at C++ Now, this year.) Your idea is sound, but I'd prefer following the pthreads naming to that of Java. ___ Rob (Sent from my portable computation engine)
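[Editor's note: since the codepad link above may no longer resolve, here is a minimal countdown_latch along the lines Michael describes. It bundles a mutex and condition variable behind count_down()/wait(); the class and member names are taken from his usage example, so this is a reconstruction, not his actual code.]

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>

class countdown_latch {
public:
    explicit countdown_latch(std::size_t count) : count_(count) {}

    // Decrement the counter; wakes all waiters when it reaches zero.
    // Never blocks the calling thread.
    void count_down() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (count_ > 0 && --count_ == 0)
            cv_.notify_all();
    }

    // Block until the counter has reached zero; returns immediately
    // if it already has.
    void wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return count_ == 0; });
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::size_t count_;
};
```

This is exactly the mutex-and-condition-variable answer from the opening post, just hidden behind a two-method interface.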
On 21/04/2013 12.54, Rob Stewart wrote:
I'd prefer to create a barrier class and, in your example, it would release waiting threads when two are blocked behind it. IOW, you'd create a barrier for two threads and both thread_proc() and the constructor would wait() on the barrier. Once both threads have called wait(), they are both released. (I plan to present that at C++ Now, this year.)
As said, that creates an unneeded "wait" in the thread body; what the OP (and I, for that matter) need is that *only* the thread creator is blocked, waiting for the threads to arrive at a certain point of execution. Regards Gaetano Mendola
On May 11, 2013, at 5:29 PM, Gaetano Mendola
As said, that creates an unneeded "wait" in the thread body; what the OP (and I, for that matter) need is that *only* the thread creator is blocked, waiting for the threads to arrive at a certain point of execution.
It seems to me that wait is inconsequential relative to the cost of creating a thread. If this were part of the thread creation process, an option to thread's constructor, say, there would be some convenience, but the performance difference doesn't seem worthwhile. Have I missed something? ___ Rob (Sent from my portable computation engine)
On 5/12/2013 6:49 AM, Rob Stewart wrote:
It seems to me that wait is inconsequential relative to the cost of creating a thread. If this were part of the thread creation process, an option to thread's constructor, say, there would be some convenience, but the performance difference doesn't seem worthwhile. Have I missed something?
The created thread is already executing when it gets to the latch. Why would you want to introduce synchronization, and potentially block the thread, where none is needed?
On May 12, 2013, at 6:45 PM, Michael Marcin
The created thread is already executing when it gets to the latch. Why would you want to introduce synchronization, and potentially block the thread, where none is needed?
Is the latch to just cause the creator to block until the created thread begins, or is it more general purpose to cause a number of threads to wait until they are all ready? I thought it was the latter and, if so, how can it be done without synchronization? I must still be missing something. ___ Rob (Sent from my portable computation engine)
On 13/05/2013 02.46, Rob Stewart wrote:
Is the latch to just cause the creator to block until the created thread begins, or is it more general purpose to cause a number of threads to wait until they are all ready? I thought it was the latter and, if so, how can it be done without synchronization? I must still be missing something.
The latch is just to cause the creator to block until the created thread begins. At the moment, with Boost off-the-shelf, you can achieve it using a barrier, but doing so you block the created thread as well; what the OP and I need is that the thread creator will eventually block waiting for the thread, not the other way around. You can achieve it using a synchronization mechanism named "latch":

boost::latch myLatch(1);

myLatch.count_down(); /// This is not a blocking operation
myLatch.wait();       /// This is blocking if a count_down() was not issued

Of course, the latch is implemented with a condition variable. Regards Gaetano Mendola
On 5/13/13 2:19 PM, Gaetano Mendola wrote:
The latch is just to cause the creator to block until the created thread begins. At the moment, with Boost off-the-shelf, you can achieve it using a barrier, but doing so you block the created thread as well; what the OP and I need is that the thread creator will eventually block waiting for the thread, not the other way around. You can achieve it using a synchronization mechanism named "latch":
boost::latch myLatch(1);
myLatch.count_down(); /// This is not a blocking operation
myLatch.wait(); /// This is blocking if a count_down() was not issued
Of course, the latch is implemented with a condition variable.
More generally you can use the latch to block N threads until N events occur. But the threads signaling the events don't have to block. From the Java docs: A CountDownLatch is a versatile synchronization tool and can be used for a number of purposes. A CountDownLatch initialized with a count of one serves as a simple on/off latch, or gate: all threads invoking await wait at the gate until it is opened by a thread invoking countDown(). A CountDownLatch initialized to N can be used to make one thread wait until N threads have completed some action, or some action has been completed N times. A useful property of a CountDownLatch is that it doesn't require that threads calling countDown wait for the count to reach zero before proceeding, it simply prevents any thread from proceeding past an await until all threads could pass.
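The "one thread waits until N threads have completed some action" pattern from the Java docs translates directly to C++. A self-contained sketch follows; the countdown_latch is repeated inline and matches the interface from the original post, so none of this is an existing Boost API:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

class countdown_latch {
public:
    explicit countdown_latch(std::size_t n) : count_(n) {}
    void count_down() {                    // non-blocking signal
        std::lock_guard<std::mutex> lk(m_);
        if (count_ > 0 && --count_ == 0) cv_.notify_all();
    }
    void wait() {                          // blocks until the count hits zero
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ == 0; });
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::size_t count_;
};

// The coordinator waits until all four workers have finished their
// "setup"; the workers signal and continue without ever blocking.
int setup_workers() {
    countdown_latch ready(4);
    std::mutex m;
    int initialized = 0;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&] {
            { std::lock_guard<std::mutex> lk(m); ++initialized; }  // the "setup"
            ready.count_down();  // does not block this worker
            // ... the worker's real work would continue here ...
        });
    }
    ready.wait();              // only the coordinator blocks
    int result = initialized;  // every setup happens-before the wait() return
    for (auto& t : workers) t.join();
    return result;
}
```

Note the asymmetry being discussed in this thread: count_down() never blocks the workers, while a barrier would have parked each of them until the coordinator arrived.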
On May 13, 2013, at 2:27 PM, Michael Marcin
On 5/13/13 2:19 PM, Gaetano Mendola wrote:
The created thread is already executing when it gets to the latch. Why would you want to introduce synchronization, and potentially block the thread, where none is needed?
Having only the one tool is easier to understand, and given that the wait would be very short, relative to the high cost of creating a thread, I don't know that the performance gain is worthwhile.
Is the latch to just cause the creator to block until the created thread begins, or is it more general purpose to cause a number of threads to wait until they are all ready?
The latch is just to cause the creator to block until the created thread begins. At the moment, with Boost off-the-shelf, you can achieve it using a barrier, but doing so you block the created thread as well; what the OP and I need is that the thread creator will eventually block waiting for the thread, not the other way around. You can achieve it using a synchronization mechanism named "latch":
Thus, a new tool for a little optimization. Is the benefit provably useful?
boost::latch myLatch(1);
myLatch.count_down(); /// This is not a blocking operation
myLatch.wait(); /// This is blocking if a count_down() was not issued
Of course, the latch is implemented with a condition variable.
For that purpose, the names, and even the approach, seem wrong. Ideally, I'd expect a constructor argument to boost::thread to control this. Then, a wrapper function can coordinate with the constructor to release it when the wrapper runs, just before it invokes the user's callable. Lacking that, a class named "gate", with "wait" and "open" member functions, would be more readable.
More generally you can use the latch to block N threads until N events occur. But the threads signaling the events don't have to block.
OK. The gate usage, above, is a degenerate case of this. Still, the names and approach are not ideal.
From the Java docs:
A CountDownLatch is a versatile synchronization tool and can be used for a number of purposes. A CountDownLatch initialized with a count of one serves as a simple on/off latch, or gate: all threads invoking await wait at the gate until it is opened by a thread invoking countDown(). A CountDownLatch initialized to N can be used to make one thread wait until N threads have completed some action, or some action has been completed N times.
A useful property of a CountDownLatch is that it doesn't require that threads calling countDown wait for the count to reach zero before proceeding, it simply prevents any thread from proceeding past an await until all threads could pass.
Thank you for that. I don't like the "count down" part of the name, at least of the release operation, and while "await" is fine, it isn't helpful to use a new verb. Why not parallel barrier and condition_variable, and use "wait" and "notify"? ___ Rob (Sent from my portable computation engine)
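Rob's proposed "gate" naming (a wait/open interface) would be small to sketch. The class name and member functions are his suggestion in this thread, not an existing Boost component:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// A one-shot gate: any number of threads wait() until some thread open()s it.
class gate {
public:
    void open() {
        std::lock_guard<std::mutex> lk(m_);
        open_ = true;
        cv_.notify_all();
    }
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return open_; });
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool open_ = false;
};

// Mirrors the widget example: the creator blocks on wait() while the new
// thread runs its setup and then open()s the gate without blocking itself.
int demo() {
    gate g;
    int setup_result = 0;
    std::thread t([&] {
        setup_result = 42;  // the thread's "setup"
        g.open();           // releases the waiting creator; does not block
    });
    g.wait();               // only the creator blocks
    int seen = setup_result;  // the setup happens-before wait() returns
    t.join();
    return seen;
}
```

Semantically this is a countdown_latch with a count of one, renamed along the lines Rob suggests.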
On 14/05/2013 13.09, Rob Stewart wrote:
Thank you for that. I don't like the "count down" part of the name, at least of the release operation, and while "await" is fine, it isn't helpful to use a new verb. Why not parallel barrier and condition_variable, and use "wait" and "notify"?
Given that it is clearly inspired by the Java one, I would leave it as it is now, with the exception of permitting count_down even if the counter is already zero. Regards Gaetano Mendola
On 5/14/13 2:21 PM, Gaetano Mendola wrote:
Given that it is clearly inspired by the Java one, I would leave it as it is now, with the exception of permitting count_down even if the counter is already zero.
I've seen that the Java one supports that, but most C++ implementations seem not to. What is the purpose of allowing count_down when the counter is already zero? What useful functionality does it enable?
On 14/05/2013 22.11, Michael Marcin wrote:
What is the purpose of allowing count_down when the counter is already zero? What useful functionality does it enable?
It would permit a thread T1 to proceed once another thread T2 has executed a certain operation at least N times. As it is implemented now (with count_down's precondition that counter > 0), T2 has to track how many count_downs it has executed (and therefore know the initial value), or issue a "try_wait" before each count_down, and this IMHO seems a bit awkward. Rubbing salt in the wound, imagine that T1 has to wait until T2 and T3 have done a certain operation N times (in total); then T2 and T3 have to synchronize at start and communicate to each other how many count_downs they have executed, and the trick of issuing a "try_wait" doesn't work in this case. Regards Gaetano Mendola
On 5/15/2013 12:10 PM, Gaetano Mendola wrote:
It would permit a thread T1 to proceed once another thread T2 has executed a certain operation at least N times. As it is implemented now (with count_down's precondition that counter > 0), T2 has to track how many count_downs it has executed (and therefore know the initial value), or issue a "try_wait" before each count_down, and this IMHO seems a bit awkward.
You say it will permit T1 to proceed; this means T1 is calling wait, right? Wait does not have the precondition that counter > 0. Oh, I see, you're saying T2, which does call count_down, has to know how many count_downs it is safe to do? The more common use case I figured was N threads all calling count_down once. I suppose this use case is just as valid. Still, it seems error-prone to me to have this be the default. You could easily have a situation where you spawn 5 threads to do work and want to wait for them all to be initialized before any starts doing work; then a later maintainer comes along and adds a 6th worker thread without updating the latch count. Now the latch would reach zero and the threads would start doing their work before the 6th is necessarily initialized, which could lead to hard-to-track-down bugs. Perhaps you could add a try_count_down method which doesn't have the counter > 0 precondition?
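Michael's suggested try_count_down (the name is his proposal in this thread, hypothetical, not a real boost::latch member) could sit next to a strict count_down like this:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>

class countdown_latch {
public:
    explicit countdown_latch(std::size_t n) : count_(n) {}

    // Strict variant: calling this with the counter already at zero is a
    // precondition violation, which catches the forgotten-6th-thread bug.
    void count_down() {
        std::lock_guard<std::mutex> lk(m_);
        assert(count_ > 0 && "count_down past zero: latch count is wrong");
        if (--count_ == 0) cv_.notify_all();
    }

    // Relaxed variant for the "at least N times" use: decrementing an
    // already-open latch is a no-op, reported through the return value.
    bool try_count_down() {
        std::lock_guard<std::mutex> lk(m_);
        if (count_ == 0) return false;
        if (--count_ == 0) cv_.notify_all();
        return true;
    }

    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ == 0; });
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::size_t count_;
};
```

This keeps the strict precondition as the default while giving Gaetano's "at least N times" scenario an explicit, intentional-looking escape hatch.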
On May 14, 2013, at 1:21 PM, Gaetano Mendola
Given that it is clearly inspired by the Java one, I would leave it as it is now, with the exception of permitting count_down even if the counter is already zero.
If the component is useful, the language in which it originated can have some influence on naming, but if the names are awkward, or don't fit well in the new language, changes are warranted. (There are enough inconsistencies in the standard library; we don't need more just to match the expectations of those coming from another language.) I especially don't think C++ should be modeled on Java. ___ Rob (Sent from my portable computation engine)
participants (4)
- Gaetano Mendola
- Michael Marcin
- Rob Stewart
- Vicente J. Botet Escriba