boost high precision timer and boost io_service
Hi,

I am currently using a boost::asio::deadline_timer to transmit data every minute. The requirement is that the interval be accurate: if the first data is sent at 10:00:05, the next must be sent at 10:01:05, and so on. However, my sending time shifts by about 2 seconds each interval, and I need to fix it. I am not sure whether the Boost high-precision timer can fix it; it looks to me like a realtime requirement. My whole application is based on boost::asio::io_service, running in a single thread on an embedded Linux system. Can the Boost high-precision timer fix the problem, or would Boost multithreading fix it, or is there simply no chance to meet the requirement because Linux is not a realtime OS?

I appreciate your insight and advice. Thank you.

Kind regards,

- jupiter
On 8/21/2019 4:05 PM, JH via Boost wrote:
there might be no chance to match the requirement simply because Linux is not a realtime OS?
You're at the mercy of the scheduler. Another process can always hog the CPU and lock you out until it's done. Have you reniced your process to increase your priority over other tasks? I recall the scheduler has acquired a lot of "knobs" over the last couple decades so this might require some research to see if any of them can help you. (It's an interesting problem so I'm interested in hearing what solutions you find.)
Hi Kenneth,
On 8/22/19, Kenneth Porter via Boost
On 8/21/2019 4:05 PM, JH via Boost wrote:
there might be no chance to match the requirement simply because Linux is not a realtime OS?
You're at the mercy of the scheduler. Another process can always hog the CPU and lock you out until it's done. Have you reniced your process to increase your priority over other tasks? I recall the scheduler has acquired a lot of "knobs" over the last couple decades so this might require some research to see if any of them can help you. (It's an interesting problem so I'm interested in hearing what solutions you find.)
Yes, I am experimenting with increasing the process priority; it will help, but I don't know by how much. I am not sure whether replacing boost::asio::deadline_timer with high_resolution_timer will help, or whether multithreading will help. The device runs a simple process that generates and transmits data, and it would take a lot of work to restructure the application for multithreading, but I doubt multithreading would beat single threading on a single-processor i.MX6 anyway. I appreciate your opinion. Thank you.

- jupiter
On 22/08/2019 13:55, JH wrote:
Yes, I am experimenting with increasing the process priority; it will help, but I don't know by how much. I am not sure whether replacing boost::asio::deadline_timer with high_resolution_timer will help, or whether multithreading will help. The device runs a simple process that generates and transmits data, and it would take a lot of work to restructure the application for multithreading, but I doubt multithreading would beat single threading on a single-processor i.MX6 anyway. I appreciate your opinion.
Timer resolution doesn't matter if you're looking at one-second-level precision anyway. Your problem lies elsewhere.

If you have a higher-priority thread which is mostly asleep but scheduled to wake up every minute, it can interrupt the lower-priority processing and do that critical work quickly and on time -- as long as you write it in a way that it doesn't need to acquire a mutex or otherwise go back to sleep as soon as it wakes up. This is a good thing, but you do have to write it carefully and correctly, and use the right data structures for the task.

If you have a single thread then you're at the mercy of whatever other work you're doing -- if that runs late then your timed work will also run late.

(This also applies to single process vs. multi process -- if it's easier for you to put your high priority code in a separate process rather than a separate thread, then that would also work. Some people find processes easier to work with than threads.)
On 8/22/19 3:55 AM, JH via Boost wrote:
Yes, I am experimenting with increasing the process priority; it will help, but I don't know by how much. I am not sure whether replacing boost::asio::deadline_timer with high_resolution_timer will help, or whether multithreading will help; the device is running a simple process
It is not only a matter of process priority. Your data can also be delayed because the network is busy.

However, I suspect that your main problem is not so much that a single timeout is delayed, but rather that a delay causes a shift in the subsequent timeouts. In that case you could measure the time between two timeouts, and then compensate for any deviations by making the next expiration time accordingly shorter or longer.

In other words, you want a sequence of expiration times like this:

T + 1 * delta
T + 2 * delta
T + 3 * delta
T + 4 * delta

but because of delays you are actually getting an accumulated error:

T + 1 * delta
T + 2 * delta + error
T + 3 * delta + error
T + 4 * delta + error

With compensation you will get:

T + 1 * delta
T + 2 * delta + error
T + 3 * delta - error (this is the compensation step)
T + 4 * delta
On 8/22/19, Bjorn Reese via Boost
On 8/22/19 3:55 AM, JH via Boost wrote:
Yes, I am doing the experiment to increase process priority, it will help, but don't know how much. I am not quite sure if replace boost::asio::deadline_timer by high_resolution_timer will help or not, or if multithreading will help, the device is running simple process
It is not only a matter of processing priority. Your data can also be delayed because the network is busy.
However, I suspect that your main problem is not so much that a single timeout is delayed, but rather that a delay causes a shift in the subsequent timeouts. In that case you could measure the time between two timeouts, and then compensate for any deviations by making the next expiration time accordingly shorter or longer.
Very good point. It is almost impossible to get realtime behaviour in Linux, but at least the compensation can mitigate the time shifting.
In other words, you want a sequence of expiration times like this:

T + 1 * delta
T + 2 * delta
T + 3 * delta
T + 4 * delta

but because of delays you are actually getting an accumulated error:

T + 1 * delta
T + 2 * delta + error
T + 3 * delta + error
T + 4 * delta + error

With compensation you will get:

T + 1 * delta
T + 2 * delta + error
T + 3 * delta - error (this is the compensation step)
T + 4 * delta
That is exactly what I should do. Thanks Bjorn and Gavin.

Kind regards,

- jupiter
participants (4)
-
Bjorn Reese
-
Gavin Lambert
-
JH
-
Kenneth Porter