RE: [Boost-Users] Using date_time::microsec_clock in windows ?
Jeff Garland wrote:
It actually shouldn't be too hard. It is really a matter of swapping out the Windows API function calls to return the times and then adjusting appropriately. (follow-ups should probably be diverted to the Boost developers list)
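For concreteness, a minimal sketch of the kind of call swap being described (this is not the actual date_time code, just one plausible way to read the Windows system clock and adjust it to microseconds since 1970):

#include <windows.h>
#include <iostream>

// Read the system clock via the Win32 API and convert it from
// 100-nanosecond intervals since 1601-01-01 (FILETIME) to
// microseconds since 1970-01-01.
unsigned long long microseconds_since_1970()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);

    ULARGE_INTEGER t;
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    // Seconds between 1601-01-01 and 1970-01-01, expressed in 100 ns units.
    const unsigned long long EPOCH_DIFF = 11644473600ULL * 10000000ULL;
    return (t.QuadPart - EPOCH_DIFF) / 10;   // 100 ns units -> microseconds
}

int main()
{
    std::cout << microseconds_since_1970() << " us since the Unix epoch\n";
}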
I'm not sure if this has been discussed or not, but it may be harder than you imagine. The Windows API functions dealing with time have a resolution of approximately 10 milliseconds. Even the functions that return values of hectonanoseconds (100 nanoseconds) are limited to this resolution.
To see what I mean, run this program:
#include
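The rest of that listing did not survive here, but a sketch in the same spirit, assuming the idea was to watch how coarsely GetSystemTimeAsFileTime() actually ticks, might look like this:

#include <windows.h>
#include <iostream>

int main()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);

    ULARGE_INTEGER prev;
    prev.LowPart  = ft.dwLowDateTime;
    prev.HighPart = ft.dwHighDateTime;

    // Spin until the reported time changes ten times, printing each step.
    for (int changes = 0; changes < 10; )
    {
        GetSystemTimeAsFileTime(&ft);
        ULARGE_INTEGER now;
        now.LowPart  = ft.dwLowDateTime;
        now.HighPart = ft.dwHighDateTime;

        if (now.QuadPart != prev.QuadPart)
        {
            // QuadPart is in 100 ns units; divide by 10000 for milliseconds.
            std::cout << (now.QuadPart - prev.QuadPart) / 10000.0 << " ms step\n";
            prev = now;
            ++changes;
        }
    }
}

On a typical system this prints steps of roughly 10 milliseconds rather than anything near 100 ns, which is the point being made.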
Jeff Garland wrote:
It actually shouldn't be too hard. It is really a matter of swapping out the Windows API function calls to return the times and then adjusting appropriately.
Jim.Hyslop wrote:
I'm not sure if this has been discussed or not, but it may be harder than you imagine. The Windows API functions dealing with time have a resolution of approximately 10 milliseconds. Even the functions that return values of hectonanoseconds (100 nanoseconds) are limited to this resolution.
There's QueryPerformanceCounter, but this may occasionally jump by several seconds because of workarounds for some chipsets (Q274323). On uniprocessor PCs, I think this has a frequency of 1.1931817 MHz. (64K cycles gives the 55 ms resolution of 16-bit Windows timers.) On multi-processor (including hyper-threaded P4) systems, it is the processor clock speed, which nowadays requires a 64-bit integer unless you scale it.

There's also the RDTSC instruction on most x86 CPUs, but uniprocessor Windows systems may execute a HALT instruction when idle, making this unreliable too (http://www.sysinternals.com/ntw2k/info/tips.shtml#Idle).

In all cases, I suppose the workaround is to use the low-resolution clock to sanity-check the high-resolution values. This may be the sort of "appropriate adjustment" that Jeff was referring to.
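A rough sketch of that sanity-check idea (not Boost code; the 100 ms tolerance and the use of GetTickCount() as the low-resolution reference are just assumptions for illustration):

#include <windows.h>
#include <iostream>

// Milliseconds elapsed since the first call, taken from
// QueryPerformanceCounter but cross-checked against GetTickCount().
double elapsed_ms()
{
    static LARGE_INTEGER freq, start;
    static DWORD tick_start = 0;
    static bool initialised = false;
    if (!initialised)
    {
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        tick_start = GetTickCount();
        initialised = true;
    }

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    double hi_ms = (now.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    double lo_ms = static_cast<double>(GetTickCount() - tick_start);

    // If the high-resolution counter has jumped away from the coarse
    // tick count (e.g. the Q274323 behaviour above), fall back to it.
    double diff = hi_ms - lo_ms;
    if (diff < 0) diff = -diff;
    return (diff > 100.0) ? lo_ms : hi_ms;
}

int main()
{
    Sleep(50);
    std::cout << elapsed_ms() << " ms elapsed\n";
}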
participants (2)
- Jim.Hyslop
- Ken Hagan