How accurate are the various Windows time-querying functions?


Windows has a bunch of time-querying functions. One group of functions uses the system performance counter. These are as accurate as the hardware allows while still conforming to the basic requirements (such as running at a constant speed and being consistent among multiple processors).

  • Query­Performance­Counter gives you the current performance counter, and
  • Query­Performance­Frequency tells you the frequency of the performance counter.
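
By way of illustration, here is a minimal sketch of timing an interval with the performance counter (on Windows XP and later these two calls always succeed, so error handling is omitted):

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency); // counts per second
    QueryPerformanceCounter(&start);

    Sleep(100); // stand-in for the work being timed

    QueryPerformanceCounter(&end);
    double seconds = static_cast<double>(end.QuadPart - start.QuadPart)
                   / static_cast<double>(frequency.QuadPart);
    printf("Elapsed: %f seconds\n", seconds);
}
```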

Another group uses the system timer, which usually means a tick every 55ms or 10ms, although the time­Begin­Period function can be used to run the timer at a higher rate.

  • Get­Tick­Count, Get­Tick­Count64
  • Get­Message­Time
  • Get­System­Time, Get­Local­Time, Get­System­Time­As­File­Time
  • Query­Interrupt­Time, Query­Unbiased­Interrupt­Time
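
A similar sketch with the system timer; note that the result is quantized to the timer period:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    ULONGLONG start = GetTickCount64(); // milliseconds since boot

    Sleep(100); // stand-in for the work being timed

    // The difference is quantized to the system timer period,
    // so short intervals can read as zero or a full tick.
    printf("Elapsed: %llu ms\n", GetTickCount64() - start);
}
```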

And then there are the so-called precise¹ variants of the system timer functions. These take the system timer value and combine it with the system performance counter to get a high-accuracy timestamp. Not only does this query two timers, it also performs additional computation to combine the values, so naturally it is slower than querying just one of the two timers, but hey, a fine wine takes time.

  • Get­System­Time­Precise­As­File­Time
  • Query­Interrupt­Time­Precise, Query­Unbiased­Interrupt­Time­Precise

These high-accuracy functions give you the best of both worlds: time correlated with real-world clocks, but with the accuracy of the system performance counter. As noted, though, this comes at a performance cost.
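
For example, a minimal sketch of taking such a timestamp (GetSystemTimePreciseAsFileTime requires Windows 8 or later):

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft); // UTC, 100ns units since 1601-01-01

    ULARGE_INTEGER stamp;
    stamp.LowPart  = ft.dwLowDateTime;
    stamp.HighPart = ft.dwHighDateTime;
    printf("100ns intervals since 1601: %llu\n", stamp.QuadPart);
}
```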

¹ Though to be pedantic, they should be called the accurate variants.

Comments (23)
  1. Somewhat relevant blog entry about the accuracy of counters, especially when it comes to micro/nano-benchmarking.

  2. IanBoyd says:

    With both QueryPerformanceCounter and GetSystemTimePreciseAsFileTime available, you might want to know which to use for accurate (100ns resolution) timings.

    While both are equally accurate, remember that GetSystemTimePreciseAsFileTime can sometimes tick slightly faster or slower than real time when the system clock is being slewed back into sync with a time reference. You can use the GetSystemTimeAdjustment API to check whether your clock is being under- or over-ticked to bring it back in sync.

    For example, at this very moment, GetSystemTimeAdjustment says my clock is adding 15.6248 ms per update, rather than the nominal 15.6250 ms per update. This means that Get­System­Time­Precise­As­File­Time is ticking slightly slower than QueryPerformanceCounter.
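
    A sketch of querying that; both values come back in 100ns units, so dividing by 10,000 gives milliseconds:

    ```cpp
    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        DWORD adjustment, increment;
        BOOL adjustmentDisabled;
        if (GetSystemTimeAdjustment(&adjustment, &increment,
                                    &adjustmentDisabled)) {
            // If adjustmentDisabled is TRUE, no explicit slew is in effect
            // and the system applies its default increment per interrupt.
            printf("Adding %.4f ms per %.4f ms update (explicit slew %s)\n",
                   adjustment / 10000.0, increment / 10000.0,
                   adjustmentDisabled ? "off" : "on");
        }
    }
    ```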

    This brings us to how you decide which you want:

    – if you want a highly accurate clock, to record *when* something happened: use Get­System­Time­Precise­As­File­Time
    – if you want to measure intervals of time, say for benchmarking or telemetry, use QueryPerformanceCounter

  3. Adrian says:

    Please don’t use timeBeginPeriod to increase the timer frequency unless absolutely necessary. Not unless you hate your users. The Microsoft documentation has made this recommendation approximately forever, but it needs more emphasis.

    Bruce Dawson has written a detailed blog post on how increasing the timer frequency hurts everyone.
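
    If you truly cannot avoid it, here is a sketch of the least-harmful pattern (DoLatencySensitiveWork is a hypothetical wrapper): raise the rate for the shortest window possible, and always pair timeBeginPeriod with a matching timeEndPeriod for the same value:

    ```cpp
    #include <windows.h>
    #pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod

    void DoLatencySensitiveWork() // hypothetical latency-sensitive section
    {
        if (timeBeginPeriod(1) == TIMERR_NOERROR) {
            Sleep(10); // sleeps ~10ms instead of rounding up to a full tick
            timeEndPeriod(1); // must match the timeBeginPeriod value exactly
        }
    }
    ```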

  4. Ken Hagan says:

    “… the system timer, which usually means 55ms or 10ms, …”

    Odd, I’m fairly sure I haven’t used a system that ticked at 55ms for many years. I assumed that they went away with some motherboard/chipset/thingummy-related evolution of the PC platform. Are there still systems ticking that slowly, by default?

  5. Clockwork-Muse says:

    A point that some people miss in the “`Get­System­Time­Precise­As­File­Time` is slower!” mantra is that the extra accuracy dwarfs any supposed slowdown. The numbers from when corefx switched over are interesting – at 28 million calls a second, or ~35ns per call, it was 1/2 to 1/3 the speed… except `Get­System­Time­As­File­Time` et al. only have an accuracy of about ~15ms, meaning they return a different value only once per ~500k distinct values of the more precise time.

    Conclusion: The speed you’re depending on would be an illusion. If you’re getting the time in a loop _that tight_, you’d be better served pulling the call out. If you want to get pretty accurate times in a loop, use the more precise version and just call it half as often.
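
    As a sketch of that last suggestion (ProcessItem is a hypothetical consumer of the timestamp), refresh the precise time only every other iteration instead of on every pass:

    ```cpp
    #include <windows.h>

    void ProcessBatch(int count)
    {
        FILETIME stamp;
        GetSystemTimePreciseAsFileTime(&stamp);
        for (int i = 0; i < count; ++i) {
            if (i % 2 == 0) { // refresh the timestamp every other iteration
                GetSystemTimePreciseAsFileTime(&stamp);
            }
            // ProcessItem(i, stamp); // hypothetical consumer
        }
    }
    ```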

  6. You can find a detailed low-level overview of the main time-related functions for .NET Framework, .NET Core, and Mono on Windows and Unix here (including information about hardware timers like TSC, ACPI, and HPET):
    * Stopwatch under the hood: http://aakinshin.net/blog/post/stopwatch/
    * DateTime under the hood: http://aakinshin.net/blog/post/datetime/

    1. In fact, the frequency of the system timer on Windows 10 is usually 64Hz (a resolution of 15.625ms). Typically, modern applications (like browsers, media players, and so on) request an increased frequency like 1000Hz or 2000Hz (a resolution of 1ms or 0.5ms). The 55ms resolution applied to older versions of Windows like Windows XP.

      1. Correction: 55ms was the actual value on Windows 95/98/Me.

      2. Apparently Chrome finally stopped asking for 1ms resolution after enough people made the case on the bugs about it, but Firefox still does (as seen in my powercfg report). My monitor only runs at 30Hz or 60Hz, there’s no way I’ll ever need 1000Hz of smoothness! There’s no good reason why anything that needs to be that smooth can’t tie itself to the audio source and get a full 48kHz timer just for the duration of media playback.

        1. ChrisR says:

          It’s not that the programmer thinks the media will run at 1000Hz, of course, but that they want to be able to wait for very short periods accurately. If your source is 30Hz and frame processing takes ~20ms, for example, then you might want to wait for ~10ms before displaying the frame. Setting the timer period so your resolution is 30ms will then result in choppy playback.
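
          On recent Windows 10 builds (roughly 1803 and later; an assumption worth verifying), one alternative for short, reasonably accurate waits that doesn’t raise the global timer rate is a high-resolution waitable timer. A sketch, with a fallback for systems where the flag is unsupported:

          ```cpp
          #include <windows.h>

          #ifndef CREATE_WAITABLE_TIMER_HIGH_RESOLUTION
          #define CREATE_WAITABLE_TIMER_HIGH_RESOLUTION 0x00000002
          #endif

          // Sketch: wait ~10ms without calling timeBeginPeriod.
          void WaitAbout10ms()
          {
              HANDLE timer = CreateWaitableTimerExW(
                  nullptr, nullptr, CREATE_WAITABLE_TIMER_HIGH_RESOLUTION,
                  TIMER_ALL_ACCESS);
              if (!timer) { // flag unsupported on older systems
                  timer = CreateWaitableTimerExW(nullptr, nullptr, 0,
                                                 TIMER_ALL_ACCESS);
              }
              LARGE_INTEGER due;
              due.QuadPart = -100000; // negative = relative; 100ns units = 10ms
              SetWaitableTimer(timer, &due, 0, nullptr, nullptr, FALSE);
              WaitForSingleObject(timer, INFINITE);
              CloseHandle(timer);
          }
          ```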

  7. Ben Voigt says:

    Sorry Raymond, but I agree with whoever named the APIs. The precise variants have improved precision, but their accuracy is no better than the system clock (not timer, the real-time clock used for absolute date and time).

    That is: If you subtract two values taken from the same workstation with no intervening suspend, the results will be amazing (valid to the precision of the HPET). If you subtract two values taken from the same workstation but with intervening suspend, the results will be valid to the resolution of the RTC. And if you compare two values taken from different workstations, the result is valid only to the accuracy of the RTC (when was the last time you synced to an atomic clock using NTP on a jitter-free network?)

    Because the values are comparable only to other values from the same source, they are high precision but not high accuracy.

    1. The precise and imprecise versions have exactly the same precision: 100ns.

      1. Ben Voigt says:

        No, that’s the resolution. Resolution deals with the encoding of the result, precision deals with repeatability on a single instrument, accuracy deals with agreement between instruments.

        1. Oh great, now there are three words I need to keep track of.

  8. rossy2401 says:

    Is it true that Get­Tick­Count and Get­Message­Time are in the same class as Get­System­Time­As­File­Time and Query­Interrupt­Time? For some reason I thought they updated more slowly for legacy reasons.

  9. AlexShalimov says:

    When I experimented with these functions (on Windows 7), it turned out that timeBeginPeriod does not affect GetTickCount and the others, only timeGetTime (which is sadly omitted here). Query­Performance­Counter is precise, but consumes too much CPU for each call.

  10. Medinoc says:

    Given how Get­System­Time­As­File­Time() worked last I checked (by simply reading a value from the TEB), I was under the impression its “actual precision” was the thread time quantum of 1/64s (15.625ms), rather than 10ms.
    At least, I found out the hard way that the precision of GetThreadTimes() was indeed 15.625ms; I may have (possibly wrongly) extrapolated from that.

    1. Medinoc says:

      And by “actual precision”, I guess “accuracy” was the word I was looking for.

    2. skSdnW says:

      It’s read from the shared data page, not the TEB.

      1. Medinoc says:

        Ah, that makes more sense. Thanks.

  11. Joshua Schaeffer says:

    What is the timer precision of CreateThreadpoolTimer(…)? I can’t seem to find it anywhere and I’m desperate to know.

    1. Joshua Schaeffer says:

      Also, what is the functional difference between SetThreadpoolTimer(…) and SetThreadpoolTimerEx(…)? They’re defined and documented identically.

