Precision is not the same as accuracy


Accuracy is how close you are to the correct answer; precision is how much resolution you have for that answer.

Suppose you ask me, “What time is it?”

I look up at the sun, consider for a moment, and reply, “It is 10:35am and 22.131 seconds.”

I gave you a very precise answer, but not a very accurate one.

Meanwhile, you look at your watch, one of those fashionable watches with notches only at 3, 6, 9 and 12. You furrow your brow briefly and decide, “It is around 10:05.” Your answer is more accurate than mine, though less precise.

Now let’s apply that distinction to some of the time-related functions in Windows.

The GetTickCount function has a precision of one millisecond, but its accuracy is typically much worse, dependent on your timer tick rate, typically 10ms to 55ms. The GetSystemTimeAsFileTime function looks even more impressive with its 100-nanosecond precision, but its accuracy is not necessarily any better than that of GetTickCount.
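The distinction is easy to observe experimentally. GetTickCount itself isn't portable, but the same experiment works against any clock: spin until the reported value changes and see how big the jump is. Here is a sketch in Python (using time.monotonic as a stand-in for whichever clock you care about; on a Windows machine of this era, GetTickCount would show jumps of roughly 10 to 16 ms with the same approach):

```python
import time

def observed_step(clock, timeout=1.0):
    """Spin on `clock` until its value changes and return the jump.

    The size of the jump is the clock's effective resolution, which can
    be much coarser than the units the clock reports in (GetTickCount
    reports milliseconds but typically advances many milliseconds at a
    time).
    """
    deadline = time.perf_counter() + timeout
    prev = clock()
    while time.perf_counter() < deadline:
        cur = clock()
        if cur != prev:
            return cur - prev
    return None  # clock never advanced within the timeout

step = observed_step(time.monotonic)
```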

If you’re looking for high accuracy, then you’d be better off playing around with the QueryPerformanceCounter function. You have to make some tradeoffs, however. For one, the precision of the result is variable; you need to call the QueryPerformanceFrequency function to see what the precision is. Another tradeoff is that the higher accuracy of QueryPerformanceCounter can be slower to obtain.
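The usage pattern is: read the counter before and after the operation, subtract, and divide by the frequency. A minimal sketch (Python for brevity; time.perf_counter is documented to be built on QueryPerformanceCounter on Windows, with the QueryPerformanceFrequency division done for you):

```python
import time

# Equivalent C pattern:
#   QueryPerformanceCounter(&c0);  /* ...work... */  QueryPerformanceCounter(&c1);
#   QueryPerformanceFrequency(&f);
#   elapsed = (double)(c1.QuadPart - c0.QuadPart) / f.QuadPart;
t0 = time.perf_counter()
total = sum(range(100_000))   # the operation being timed
t1 = time.perf_counter()
elapsed = t1 - t0             # seconds; the frequency division is implicit
```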

What QueryPerformanceCounter actually does is up to the HAL (with some help from ACPI). The performance folks tell me that, in the worst case, you might get it from the rollover interrupt on the programmable interrupt timer. This in turn may require a PCI transaction, which is not exactly the fastest thing in the world. It’s better than GetTickCount, but it’s not going to win any speed contests. In the best case, the HAL may conclude that the RDTSC counter runs at a constant frequency, so it uses that instead. Things are particularly exciting on multiprocessor machines, where you also have to make sure that the values returned from RDTSC on each processor are consistent with each other! And then, for good measure, throw in a handful of workarounds for known buggy hardware.

Comments (44)
  1. Moasat says:

    So, I’m assuming the WaitForSingleObject and WaitForMultipleObjects also use GetTickCount for timing? If I want a timeout of say 30ms, is it going to use GetTickCount and end up timing out somewhere between 10 and 55ms?

  2. LarryOsterman says:

    Moasat, it’s not going to use GetTickCount, but GetTickCount and WaitForSingleObject use the same timer with the same resolution.

  3. Steve Hazel says:

    There is a LOT to windows timing.

    Tons of methods with tons of tradeoffs…

    We yack about it on this email list for midi developers: http://groups.google.com/group/mididev

    multimedia timers, directmusic timers, NT waitable timers, yada yada yada…

    I wish it was all simpler…:(

    …Steve

  4. Carlos says:

    I had a problem with broken performance data on a four-way server that had two of its processors running at one speed, and two at another speed. We discovered that QueryPerformanceCounter and QueryPerformanceFrequency gave different results depending on which of the pairs of processors they ran on, so the perfmon data was mostly garbage. That was eight years ago so it’s probably fixed now.

  5. Huh says:

    What a typical M$ post by Mr Smug-I’m-So-Much-Better-Than-You Chen. Boils down to: ‘you can’t have both accuracy and precision’.

    Err why not? It’s not like they’re mutually exclusive, in fact, in almost all fields of endeavor there’s a high degree of correlation! How about actually implementing a method to measure time both accurately and precisely, instead of patronizing us by explaining something every 12 year old highschooler is taught in science class?

  6. AC says:

    I’ve just read the information about the RDTSC instruction, and Intel specifies that on the Pentium M and similar processors the value is not incremented at a constant rate (which is logical, since the frequency changes depending on the CPU load). Does Windows (2k, XP) take the RDTSC values in QueryPerformanceCounter on the Pentium M or not? What’s more important for this Windows function: a correlation to seconds, or a correlation to CPU clocks? I’d prefer the latter.

  7. oldnewthing says:

    I’m sorry, Huh, that you find today’s entry insulting. But you’d be surprised how many people don’t understand the difference. Search Google Groups for “GetTickCount” and you’ll see lots of confusion. (Oh, and where did I say you can’t get both precision and accuracy? QueryPerformanceCounter usually gets you both, but at a cost in performance.)

  8. About 4 years ago, I had some interesting time-warped results from the QueryPerf APIs on an IBM ThinkPad A21p. This had a processor with SpeedStep, which I think was relatively new back then. The QueryPerf counters would change speed according to which of its speedstep power modes the processor was in… This made it hard to get meaningful results. :)

    (The System info control panel applet would also report a different clock speed depending on the processor power mode too.)

    …and to the aptly named "Huh": try reading the article, and then try engaging your brain. Enjoy the novelty of this sensation.

    You accuse Raymond of saying ‘you can’t have both accuracy and precision’ but in fact he says nothing of the sort. The precision of the QueryPerf stuff is, as he says, variable, and from that you appear to have concluded, incorrectly, that the precision must be low.

    In fact, it’s very high: it’s always orders of magnitude better than any of the other APIs. In my experience its precision has always been at least microsecond order, and it’s often orders of magnitude better in the cases where it can use RDTSC.

  9. David Heffernan says:

    Huh, Raymond did not say "you can’t have both accuracy and precision". He actually said that the 2 concepts were different. I’m afraid that the topics discussed here are probably too complex for someone of your limited comprehension skills.

  10. Xavier says:

    Huh: In most fields, people are trained not to report any more precision than they have accuracy. If a physicist can only measure time to plus or minus 55 ms, he says so.

    Raymond is explaining that a function like GetTickCount doesn’t necessarily have millisecond precision just because it returns a value in milliseconds. That time measurement has to come from somewhere, and unfortunately computers on which Windows runs don’t have a standard high-accuracy, high-precision time clock that is also quick to access. So different APIs have made different tradeoffs, and this IS worth reporting on.

  11. Aleko says:

    I wonder why no one ever mentions the timeGetTime function in winmm.dll. It’s probably the simplest way to get the time, and it’s quite precise (~5ms), although I’m not sure about its accuracy.

    Interestingly, MSDN says that in Win95 the precision is 1ms, while in NT/2000 it can be 5ms or more. How can that be?

  12. gkdada says:

    About a year back, I spent 2 weeks studying nothing but timers and counters in Windows (for a borderline multimedia project).

    What Raymond says here is 100% accurate (and precise!). I don’t see any smugness in it. I think we just have to jump on Huh and beat him up in a proper democratic fashion.

  13. Lance Fisher says:

    Huh, you should try being nicer.

  14. A says:

    "About 4 years ago, I had some interesting time-warped results from the QueryPerf APIs on an IBM ThinkPad A21p. This had a processor with SpeedStep, which I think was relatively new back then. The QueryPerf counters would change speed according to which of its speedstep power modes the processor was in… This made it hard to get meaningful results. :)"

    Sadly, this same thing happens today (on XP x64) with the Cool’n’Quiet feature of AMD Athlon 64 processors (a SpeedStep equivalent). I’ve had to yank out all the QueryPerformanceCounter calls in my apps to work around this.

  15. Jare says:

    "I’ve had to yank out all the QueryPerformanceCounter calls in my apps to work around this."

    Which method have you reverted to? Good old GetTickCount()?

  16. autist0r says:

    what about

    <code>

    __declspec(naked) unsigned long GetCounter()
    {
        // rdtsc leaves the low 32 bits of the timestamp counter in EAX,
        // which is where a 32-bit return value belongs; a naked function
        // must supply its own ret.
        __asm {
            rdtsc
            ret
        }
    }

    </code>

    And by the way, keep in mind you most likely do not need a lot of precision with timing.

  17. A says:

    "Which method have you reverted to? Good old GetTickCount()?"

    Yes. The resolution isn’t as good, but at least it increments at a constant rate.

    I didn’t try timeGetTime; that might work too, provided it isn’t based on RDTSC.

  18. Davy says:

    autist0r: how about measuring? it would be nice to have at least a precision of a millisecond..

    QueryPerformanceCounter is nice for this.. but the downside is:

    http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323

    anybody have another solution?

  19. Chris says:

    I too have had problems with QueryPerformanceCounter(). On some el cheapo overclocked PCs, the LSBs were garbage and it would sometimes go back in time! Needless to say, most apps do not like going back in time. :D

    The Multimedia timers are also interesting Windows timers. The MM timers sleep using WaitForSingleObject() or whatever, but wake up early. Then they spin in a busy loop until the actual wake-up time, so they can fire events very precisely.

  20. Steve Hazel says:

    Chris – how do you know MM timers spin?

    Inquiring minds wanna know…

  21. Richard says:

    Thanks for that informative post, very interesting.

    Richard

  22. Derek says:

    "Raymond is explaining that a function like GetTickCount doesn’t necessarily have millisecond precision just because it returns a value in milliseconds."

    I think you mean "[…] GetTickCount doesn’t necessarily have millisecond *accuracy* just because it returns a value in milliseconds."

    :)

  23. memet says:

    I implemented a kernel driver that connects to my USB webcam, and determines with very high accuracy (but no precision) the tick count by looking at the relative position of the sun. Of course, this method doesn’t work so well at night time, but I’m working on a lunar based mod.

    </joke>

    Have a nice friday night everyone.

  24. Brooks Moses says:

    The problem with that, memet, is that you can’t really have an accuracy that’s higher than your precision — there’s no way that a system which reports things to the nearest hour could have millisecond accuracy, because the nearest hour is usually more than a millisecond away.

    (Meanwhile, your webcam system ought to be working fine at night, with a +/- six hour — or eight in winter at high latitudes — precision. All it needs to measure is "dark")

    I bring that up not to harsh on your joke, but because there’s an important point there: what your system has that’s valuable is a lack of drift — over really long periods of time, the fractional error is quite tiny (assuming the system stays running, of course!).

    Observation: a number for accuracy is nearly meaningless, unless one also specifies what length of time one’s measuring for.

    Consider, for instance, a hypothetical timer that’s usually off by 5%, and reports with a precision of 5ms. For a 2.0s time, it’s probably off by 100ms. But, for anything under 0.1s or so, it’s accurate to +/- 5ms. That’s very different from a timer that’s off by 100ms no matter how short the measurement time is.

    And so that brings me back to Raymond’s original post: He hasn’t really told us all that much about the accuracy of GetTickCount. Is that 10-55ms expected error still valid if I’m timing something that only lasts 20ms? What if I measure something that takes 20 minutes? Is this a sort of random error that effectively gets added to all measurements regardless of size, or is it a typical value of the drift over an "average" measurement time?

  25. josh says:

    If you can reliably repeat an operation enough times, you can get as much accuracy as you need out of GetTickCount, even better than 1ms.

    As for the actual behavior of GetTickCount: If you watch the output for a while, you’ll see that it will sit at one value and then jump up by 10 or 11ms. You’ll see even bigger jumps periodically, but that’s for a different reason.
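    josh's repetition trick can be sketched as follows (Python for illustration): time many repetitions with whatever coarse clock you have, then divide by the count. As long as the total run spans many timer ticks, the per-iteration figure resolves far below one tick.

```python
import time

def time_per_call(fn, reps=100_000):
    # The total elapsed time spans many clock ticks even on a coarse
    # timer, so dividing by the repetition count recovers sub-tick
    # resolution for the individual operation.
    t0 = time.monotonic()
    for _ in range(reps):
        fn()
    t1 = time.monotonic()
    return (t1 - t0) / reps

cost = time_per_call(lambda: sum(range(50)))  # microsecond-scale work
```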

  26. At http://developer.nvidia.com/object/timer_function_performance.html there is an article empirically comparing timing methods from a performance point of view.

  27. The QueryPerformance* functions fail unless the supplied LARGE_INTEGER is DWORD aligned. Shouldn’t that be documented? I wasted a couple of hours once trying to figure this out… :)

  28. jon says:

    Actually I would say that "precision" is the wrong term to use here. The dictionary.com definition is:

    pre·ci·sion

    The state or quality of being precise; exactness.

    1. The ability of a measurement to be consistently reproduced.

    2. The number of significant digits to which a value has been reliably measured.

    Neither of these definitions really seems to be what Raymond is talking about. I would stick with the term "resolution" rather than "precision", since, for the very reasons Raymond mentions, the values returned by these functions are neither accurate nor precise.

  29. RobL says:

    This article does raise an interesting point though: in maths and physics (or any of the sciences, for that matter) it is basically *illegal* to quote any of your figures to more precision than you have accuracy. It is not so in computing, a point worth reiterating.

  30. Brian says:

    > The problem with that, memet, is that you can’t really have an accuracy that’s higher than your

    > precision — there’s no way that a system which reports things to the nearest hour could have a

    > millisecond precision, because the nearest hour is usually more than a millisecond away.

    I disagree. It’s perfectly reasonable to have high accuracy but poor precision.

    Consider the scenario where you sample something repeatedly. If this thing is a natural process, then when you plot the probability distribution of these samples it will take the shape of a normal distribution, assuming there are enough samples (by the Central Limit Theorem).

    This Normal distribution will have a mean and a standard deviation. If your measurements were accurate but not precise, then the mean would very accurately reflect the true value of the thing you were trying to measure, but the standard deviation would be large. Or in other words, it would be a wide but properly centered distribution. Conversely if you had some bias in your measurement but an otherwise precise apparatus, it would mean that you would have a very tall and sharp distribution, but centered at the wrong value.

    It is perfectly reasonable to expect an accuracy that is very much higher than the precision, sometimes by orders of magnitude, if the observations are averaged. This is the basis on which many scientific instruments operate, by the way. No single reading can give the necessary accuracy, but with averaging you can trade the time it takes to collect the readings for improved accuracy, as long as there is no systematic bias in your apparatus.
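    Brian's point is easy to demonstrate numerically. The sketch below (Python, synthetic data) draws noisy but unbiased readings around a known true value: any single reading is only good to about +/- 0.5, but the mean of ten thousand of them lands within a few thousandths of the truth.

```python
import random
import statistics

random.seed(2005)
TRUE_VALUE = 10.0
SIGMA = 0.5   # per-reading imprecision; no systematic bias

readings = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(10_000)]

spread = statistics.stdev(readings)                    # stays near SIGMA
mean_error = abs(statistics.fmean(readings) - TRUE_VALUE)
# With no bias, averaging N readings shrinks the error by roughly
# sqrt(N): here about 0.5 / 100 = 0.005.
```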

  31. Norman Diamond says:

    Friday, September 02, 2005 3:50 PM by Huh

    > I really should have my reproductive organs

    > ground to hamburger so as to prevent any

    > possibility of inflicting my corrupted DNA

    > on the gene pool.

    Nah, that’s excessive. All you have to do is patronize yourself by learning something every 12 year old highschooler is taught in mathematics class. It’s called logic. And in fact if you learn to think logically, then you might even learn enough to work on computers.

    Next you’ll need to keep your reproductive organs in order to put something like this on display:

    "My wristwatch is more accurate than Windows. You guys get ticks every 55ms or more or less and you don’t even know if it’s more or less. The only reason Microsoft even gives you the time of day is because NTP is enabled by default in Windows XP. Well, my wristwatch is around 300km from the nearest transmitter of atomic clock synchronized time signals, so due to the speed of light my wristwatch is around 1 millisecond behind. Permanently. Always. And how many milliseconds does it take for NTP to get you the time of day from Microsoft?"

  32. Michael Fitzpatrick says:

    Back in the days of Windows 3.1 I used the 8253 timer chip directly and got about 840 nanoseconds of resolution, since the clock was 1.19MHz. I found that I could read the timer of a 33MHz 486 in 40usec. On a periodic event this offset is pretty constant, so it washes out. When I ported the code to Windows 98, surprise! The timer code still worked. I haven’t tried porting this to WinNT, and haven’t had the need. But another project on WinNT showed that the multimedia timer (MMTIMER.DLL) was pretty stable, about +/- 100 usec jitter with a 1 msec interrupt/callback on a 33MHz 486.

    Anyways, I know that the QueryPerfFreq() API can return different freq’s, I just have never seen anything different than 1.19MHz (BTW, that’s where the 55 ms DOS INT 8 tick comes from, 840 ns * 64K ticks = 55 ms)
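    Michael's arithmetic checks out. A quick verification in Python, using the conventional 1,193,182 Hz figure for the 8253/8254 input clock:

```python
PIT_HZ = 1_193_182                     # 8253/8254 input clock (~1.19 MHz)

tick_ns = 1e9 / PIT_HZ                 # one tick: about 838 ns (the "840 ns")
rollover_ms = 65_536 * 1e3 / PIT_HZ    # 16-bit counter wrap: about 54.9 ms
# 65536 ticks at ~838 ns each is where the famous 55 ms DOS tick comes from.
```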

  33. Yadayadayada says:

    Actually it was accurate to within about two minutes when I read it.

  34. Huh says:

    I would like to point out that the other ‘Huh’ posting was not made by me.

    That I have to even say this speaks volumes about how selectively the ‘no impersonation’ rule is enforced around here: no surprise there.

  35. oldnewthing says:

    I deleted the fake ‘Huh’ posting.

    The "no impersonation" rule is enforced when people point out a violation. How am I supposed to know who the real ‘Huh’ is?

  36. Norman Diamond says:

    Monday, September 05, 2005 3:55 PM by oldnewthing

    > I deleted the fake ‘Huh’ posting.

    How do you know it was fake? When I lived in Toronto there were two Norman Diamonds in the phone book. (There’s only one in Ome now though.)

    And don’t forget, in the country where your ancestors emigrated from, one Hu is considering rehabilitating another Hu who was purged or something like that. Maybe one who was transliterated differently needs rehabilitating too, how do you think, huh?

  37. Ulric says:

    Can you clarify the concluding sentence of the post? Do *we* have to do additional work for multiprocessors or ‘buggy hardware’, or is this handled by Windows?

    Why did the other poster have to yank the QPC code from his app? If you call QueryPerformanceCounter and Frequency properly for short periods, you shouldn’t have any problems, right?

  38. A says:

    "Why did the other poster have to yank the QPC code from his app?"

    I discovered that on CPUs with variable-speed clocks (e.g. Athlon 64 with its Cool’n’Quiet feature enabled) the actual frequency of the performance counter is equal to the current CPU clock frequency — it is NOT a fixed number.

    For example, on a 1.8 GHz Athlon 64 running at full speed the performance counter ticks 1,800,000,000 times per second. But when the load is light, the processor switches to 1.0 GHz, and the performance counter ticks only 1,000,000,000 times per second. As you can probably imagine, this behavior breaks any code that attempts to convert performance counter values into time figures, e.g.:

    count_before = QPC;

    … long operation …

    count_after = QPC;

    seconds_elapsed = (count_after - count_before) / QPF;

    QPF appears to always return the maximum processor frequency (1,800,000,000), so even if the frequency stays at 1.0 GHz during the entire operation, seconds_elapsed will still be off.
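    The size of the resulting error is easy to work out. A sketch with the hypothetical figures from this comment (Python, illustrative numbers only):

```python
REPORTED_HZ = 1_800_000_000   # what QueryPerformanceFrequency claims
ACTUAL_HZ = 1_000_000_000     # the counter's real rate while throttled

true_seconds = 2.0
ticks = true_seconds * ACTUAL_HZ    # what the counter really advances by
computed = ticks / REPORTED_HZ      # what the naive formula reports
# The result is low by ACTUAL_HZ / REPORTED_HZ: 2.0 s comes out as ~1.11 s.
```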

  39. oldnewthing says:

    "How do you know it was fake?"

    Because somebody posting as ‘Huh’ said so. If you guys are going to start an impersonation war then I’ll just turn off comments.

  40. oldnewthing says:

    Elias: All pointers must be properly aligned (unless explicitly notated to the contrary). That’s just a fundamental rule of the language (6.2.3.2.7), like "Don’t use memory after freeing it".

  41. Chris says:

    Steve Hazel: ok, so I cannot *personally* vouch for whether the MM timers spin, but I worked at a Seattle startup with some ex-Microsoft devs who worked in the MM group.

  42. All pointers must be properly aligned (unless explicitly notated to the contrary). That’s just a fundamental rule of the language (6.2.3.2.7), like "Don’t use memory after freeing it".

    True, but the x86 family doesn’t have any alignment requirements. All the other WinAPI functions work fine with arbitrary pointers on x86. It seems odd that the QueryPerformance functions are so intolerant. Someone suggested this may be a HAL issue (msgid: <e4stZ1LSDHA.2228@tk2msftngp13.phx.gbl>).

  43. I recently wanted to add some performance measurements to an application. To avoid duplicating code everywhere I needed to make measurements, I coded up a small helper class.

Comments are closed.