[meta state machine / MSM] execute_queued_events
In the MSM doc section "Enqueueing events for later processing", you state that "Calling execute_queued_events() will then process all enqueued events (in FIFO order)." You also said in a previous post that MSM is not thread safe, thus requiring an external mechanism to ensure thread safety.

I don't see a mechanism to execute a single event at a time, which would be preferable if I have to lock the queue while processing events. Having said that, a cursory examination of the "execute_queued_events" code makes me think it only processes one event per call.

Which is correct? Is the documentation correct that it actually processes all queued events? If not, can the documentation be corrected? If so, is there a publicly usable call to process a single queued event? If there is no publicly available call for processing a single queued event, how do you recommend processing queued events in a thread-safe manner that won't lock the queue for extended periods of time?
In the MSM doc section "Enqueueing events for later processing", you state that "Calling execute_queued_events() will then process all enqueued events (in FIFO order)."
You also said in a previous post that MSM is not thread safe, thus requiring an external mechanism to ensure thread safety.
True.
I don't see a mechanism to execute a single event at a time, which would be preferable if I have to lock the queue while processing events. Having said that, a cursory examination of the "execute_queued_events" code makes me think it only processes one event per call.
Looks like the implementation is wrong; I'll fix this and provide a second member for single-event processing.

May I ask what your use case is? Do you have one thread enqueuing events and another executing them, or several threads doing both? I ask because I'm working on C++11 MSMv3 (msm3 branch on GitHub) and I have an unfinished but promising implementation of a lockfree msm which is much faster than locking. It works only on simple fsms at the moment (no submachines and no pseudo entry/exit).

It works this way: all threads process events and call guards and actions concurrently. If a thread notices, at the time of switching the current state (which is only a check of an atomic), that another thread got there first, it rolls back and retries. It's a somewhat advanced way of building an fsm, but if you feel like trying, there are some tests showing how to do it and I'm ready to help. I ask about your use case because this works best when one thread processes most of the events and the others only do so from time to time.
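To give a rough idea of the roll-back-and-retry principle, here is a minimal generic sketch using std::atomic. It only illustrates the pattern, not the actual MSM3 implementation, and the state names are made up:

```cpp
// Generic sketch of roll-back-and-retry: every thread evaluates the
// transition against the state it observed, then tries to publish the new
// state with a compare-and-swap on an atomic. If another thread switched
// the state in the meantime, the CAS fails and the thread retries.
#include <atomic>

enum State { Idle, Busy };

std::atomic<int> current_state{Idle};

// Try to process an "event" whose only transition is Idle -> Busy.
bool process_start_event()
{
    for (;;)
    {
        int observed = current_state.load(std::memory_order_acquire);
        if (observed != Idle)
            return false;                    // no transition from this state
        // ... guard and action would run here, based on 'observed' ...
        if (current_state.compare_exchange_weak(
                observed, Busy,
                std::memory_order_acq_rel, std::memory_order_acquire))
            return true;                     // we switched the state
        // another thread switched the state first: roll back and retry
    }
}
```

HTH, Christophe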
May I ask what your use case is? Do you have one thread enqueuing events and another executing them, or several threads doing both?
[*] I haven't started using queued events yet, but if I do it will be with multiple threads enqueuing events. In the long run, I would probably have just a single thread processing the event queue (since queuing theory says that is always the best approach). However, there may be a transition period switching from process_event to queued events where the (multiple) threads queuing the events are also processing them.
I ask because I'm working on C++11 MSMv3 (msm3 branch on GitHub) and I have an unfinished but promising implementation of a lockfree msm which is much faster than locking. It works only on simple fsms at the moment (no submachines and no pseudo entry/exit). It works this way: all threads process events and call guards and actions concurrently. If a thread notices, at the time of switching the current state (which is only a check of an atomic), that another thread got there first, it rolls back and retries. It's a somewhat advanced way of building an fsm, but if you feel like trying, there are some tests showing how to do it and I'm ready to help. I ask about your use case because this works best when one thread processes most of the events and the others only do so from time to time.
[*] My case would be where most of the events are queued by a 'main' thread, with only occasional event queuing from other threads. So it might be appropriate. What's the URL? A brief search on Github didn't get me to the right repository.
--- Steve H.
Hi, sorry for hijacking this thread, but I do have a very similar use case.

I modeled the states of a simple network communication (implemented using boost::asio) using boost::msm. The states are: Disconnected -> Connecting -> Idle -> Sending Message -> Waiting for Response -> Idle.

I have multiple threads that use this network communication to send messages. While the state "Waiting for Response" is active, I cannot send any further messages. So when a thread wants to send a message, I enqueue it in a separate queue, and I read this queue only when entering the state "Idle". The reason for the separate queue is that the transition "Waiting for Response -> Idle" can only be triggered through the boost::asio read handler.

Maybe this can be done more easily with your new implementation?

Thanks, Manuel
I modeled the states of a simple network communication (implemented using boost::asio) using boost::msm. The states are: Disconnected -> Connecting -> Idle -> Sending Message -> Waiting for Response -> Idle
I have multiple threads, which use this network communication to send messages. While the state "Waiting for Response" is active, I cannot send any further message. So when a thread wants to send a message, I enqueue this in a separate queue and I read this queue only when entering state "Idle". The reason for the separate queue is that the transition "Waiting for Response -> Idle" can only be triggered through the boost::asio read handler.
Maybe this can be done easier with your new implementation?
It will, but not yet, because the queue handling is missing in the lockfree part. I need some time to finish this.

Seeing the use case, I think you might be using the wrong tool. What you want is not to enqueue your events in advance; what you want is to defer the event with the message until you enter Idle. You might want to have a look at deferred events. Then, for the synchronization, you can just lock any calls to process_event, as one of them will eventually process the deferred event too. When I'm done with lockfree, you will simply have to remove the lock and it will work the same, just faster.

I personally often have the same thing to implement, and I use a single_thread_scheduler from my upcoming boost.asynchronous library to serialize calls.
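A minimal sketch of what the deferring part could look like (state and event names are just placeholders for your fsm, and the machine is reduced to the two transitions that matter here):

```cpp
// Minimal sketch, not your actual fsm: EvtSendMessage arriving while we are
// waiting for a response is deferred and replayed automatically once the
// machine is back in Idle.
#include <boost/msm/back/state_machine.hpp>
#include <boost/msm/front/state_machine_def.hpp>
#include <boost/msm/front/states.hpp>
#include <boost/mpl/vector.hpp>

namespace msm = boost::msm;
namespace mpl = boost::mpl;

struct EvtSendMessage {};        // placeholder: carries the message to send
struct EvtResponseReceived {};   // placeholder: fired from the asio read handler

struct Comm_ : msm::front::state_machine_def<Comm_>
{
    // tell MSM that deferred_events declared in states should be honored
    typedef int activate_deferred_events;

    struct Idle : msm::front::state<> {};

    struct WaitingForResponse : msm::front::state<>
    {
        // any EvtSendMessage arriving in this state is deferred
        typedef mpl::vector<EvtSendMessage> deferred_events;
    };

    typedef Idle initial_state;

    // reduced transition table: Idle -> WaitingForResponse on send,
    // WaitingForResponse -> Idle on response
    struct transition_table : mpl::vector<
        _row< Idle,               EvtSendMessage,      WaitingForResponse >,
        _row< WaitingForResponse, EvtResponseReceived, Idle               >
    > {};
};

typedef msm::back::state_machine<Comm_> Comm;
```

A thread that wants to send while a response is still outstanding then simply calls process_event with the message event (under your external lock); the machine keeps the event and replays it when EvtResponseReceived brings it back to Idle, so the extra hand-written queue is not needed.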
[*] My case would be where most of the events are queued by a 'main' thread, with only occasional event queuing from other threads. So it might be appropriate. What's the URL? A brief search on Github didn't get me to the right repository.
Yes it might. Branch is: https://github.com/boostorg/msm/tree/msm3 As it's still in development there is no documentation yet, but there are a few tests in test/Lockfree....cpp
I've looked on GitHub and can't seem to find the version of MSM you referred to earlier. Can you send me a link? (I think you did this once, but I can't seem to find it now.)

As to your question on my application, I'm considering several possibilities:
1) One or more threads enqueueing events and a separate thread running the state machine and processing them.
2) Multiple cooperating state machines that may send events to each other.

--- Steve H.
Hi,
I've looked on GitHub and can't seem to find the version of MSM you referred to earlier. Can you send me a link? (I think you did this once, but I can't seem to find it now.)
For example https://github.com/boostorg/msm/tree/msm3, or if you want to see code: https://github.com/boostorg/msm/blob/msm3/test/Lockfree1.cpp It's still under construction; I'm suffering from an acute lack of time :(
As to your question on my application:
I'm considering several possibilities: 1) One or more threads enqueueing events and a separate thread running the state machine and processing them.
This can be done with:
- a lock before enqueueing, and a lock before executing the queued events (a minimal sketch follows below),
- the msm3 branch above (lockfree msm); I'm working on exactly this and hope to support it very soon,
- my upcoming asynchronous library ;-)
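For the first option, here is a minimal sketch of the external locking (the wrapper and its names are made up; MyFsm is assumed to be a msm::back::state_machine<> with its default message queue enabled):

```cpp
// Minimal sketch of option 1: serialize all access to the fsm and its queue
// with one mutex. Producer threads call enqueue(), the processing thread
// calls execute_queued().
#include <mutex>

template <class Fsm>   // e.g. locked_fsm<MyFsm> where MyFsm is a back-end fsm
class locked_fsm
{
public:
    void start()
    {
        std::lock_guard<std::mutex> guard(m_mutex);
        m_fsm.start();
    }

    template <class Event>
    void enqueue(Event const& e)
    {
        std::lock_guard<std::mutex> guard(m_mutex);   // lock before enqueueing
        m_fsm.enqueue_event(e);
    }

    void execute_queued()
    {
        std::lock_guard<std::mutex> guard(m_mutex);   // lock before executing
        m_fsm.execute_queued_events();
    }

private:
    Fsm        m_fsm;
    std::mutex m_mutex;
};
```

Note that with this naive version the lock is held while the guards and actions of the queued events run, which is exactly the concern about locking the queue for extended periods raised at the start of the thread; a single-event execution member would let the lock be released between events.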
2) Multiple cooperating state machines that may send events to each other.
This is what I want to achieve with the lockfree msm. It works so far if you only have simple fsms (no submachines, no queue, though I will speed this up if you need it). HTH, Christophe