Precision and accuracy of DateTime


The DateTime struct represents dates as a 64-bit number that measures the number of “ticks” since a particular start date. Ten million ticks equals one second.
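
As a quick sanity check, the framework itself exposes these constants, so the tick arithmetic can be verified directly:

```csharp
using System;

class TickMath
{
    static void Main()
    {
        // One tick is 100 nanoseconds; ten million of them make a second.
        Console.WriteLine(TimeSpan.TicksPerSecond);      // 10000000
        Console.WriteLine(TimeSpan.TicksPerMillisecond); // 10000

        // Ticks counts from the start date 0001-01-01T00:00:00.
        DateTime epoch = new DateTime(0);
        Console.WriteLine(epoch); // January 1, year 1 (exact format is culture-dependent)
    }
}
```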

That’s quite a high degree of precision. You can represent dates and times to sub-microsecond accuracy with a DateTime, which is typically more precision than you need. Not always, of course; on modern hardware you can probably execute a couple hundred instructions in one tick, and therefore if you want timings that are at the level of precision needed to talk about individual instructions, the tick is too coarse a measure.

The problem that arises with having that much precision is of course that it is very easy to assume that a given value is as accurate as it is precise. But that’s not warranted at all! I can represent my height in a double-precision floating-point number as 1.799992352094 metres; though precise to a trillionth of a metre, it’s only accurate to about a hundredth of a metre because I do not have a device which can actually measure my height to a trillionth of a metre, or even a thousandth of a metre. There is way more precision than accuracy here.

The same goes for dates and times. Your DateTime might have precision down to the sub-microsecond level, but does it have accuracy? I synchronize my computers with time.gov fairly regularly. But if I don’t do so, their clocks wander by a couple of seconds a year typically. Suppose my clock loses one second a year. There are 31.5 million seconds in a year and 10 million ticks in a second, so it is losing one tick every 3.15 seconds. Even if my clock were miraculously accurate down to the level of a tick at some point, within ten seconds, it’s already well off. Within a day much of the precision will be garbage.
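
The back-of-the-envelope arithmetic above can be written out; the constants below are just the figures from the paragraph (31.5 million seconds per year, ten million ticks per second), not anything measured:

```csharp
using System;

class DriftMath
{
    static void Main()
    {
        const double secondsPerYear = 31.5e6;   // roughly a year
        const double ticksPerSecond = 1e7;
        const double secondsLostPerYear = 1.0;  // a clock losing one second a year

        double ticksLostPerYear = secondsLostPerYear * ticksPerSecond; // 10 million ticks
        double secondsPerLostTick = secondsPerYear / ticksLostPerYear;
        Console.WriteLine(secondsPerLostTick);  // 3.15: one tick lost every 3.15 seconds
    }
}
```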

If you do a little experiment you’ll see that the operating system actually gives you thousands of times less accuracy than precision when asked “what time is it?”

long ticks = DateTime.Now.Ticks;
while(true)
{
    if (ticks != DateTime.Now.Ticks)
    {
        ticks = DateTime.Now.Ticks;
        Console.WriteLine(ticks);
    }
    else
    {
        Console.WriteLine("same");
    }
}

On my machine this says “same” eight or nine times, and then suddenly the Ticks property jumps by about 160000, which is 16 milliseconds, a 64th of a second. (Different flavours of Windows might give you different results, depending on details of their thread timing algorithms and other implementation details.)

As you can see, the clock appears to be precise to the sub-microsecond level but it is in practice only precise to 16 milliseconds. (And of course whether it is accurate to that level depends on how accurately the clock is synchronized to the official time signal.)

Is this a flaw in DateTime.Now? Not really. The purpose of the “wall clock” timer is to produce dates and times for typical real-world uses, like “what time does Doctor Who start?” or “when do we change to daylight savings time?” or “show me the documents I edited last Thursday after lunch.”  These are not operations that require submicrosecond accuracy.

(And incidentally, in VBScript the “wall clock” timer methods built in to the language actually round off times we get from the operating system to the nearest second, not the nearest 64th of a second.)

In short, the question “what time is it?” really should only be answered to a level of precision that reflects the level of accuracy inherent in the system. Most computer clocks are not accurately synchronized to even within a millisecond of official time, and therefore precision beyond that level of accuracy is a lie. It is rather unfortunate, in my opinion, that the DateTime structure does surface as much precision as it does, because it makes it seem like operations on that structure ought to be accurate to that level too. But they almost certainly are not that accurate.

Now, the question “how much time has elapsed from start to finish?” is a completely different question than “what time is it right now?” If the question you want to ask is about how long some operation took, and you want a high-precision, high-accuracy answer, then use the Stopwatch class. It really does have nanosecond precision and accuracy that is close to its precision.
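
A minimal sketch of the elapsed-time pattern; the loop being timed is just a placeholder workload:

```csharp
using System;
using System.Diagnostics;

class Timing
{
    static void Main()
    {
        Stopwatch watch = Stopwatch.StartNew();

        // ... the operation being timed goes here ...
        long sum = 0;
        for (int i = 0; i < 1000000; i++)
            sum += i;

        watch.Stop();

        // Stopwatch "ticks" are a different unit from DateTime ticks;
        // their length varies by machine, so divide by Stopwatch.Frequency
        // (ticks per second) to convert to real time.
        Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
        Console.WriteLine("Resolution: {0} ticks per second", Stopwatch.Frequency);
    }
}
```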

Remember, you don’t need to know what time it is to know how much time has elapsed. Those can be two different things entirely.

Comments (30)

  1. Focus says:

    Time in Windows has always been a little "bonkers". It might depend on each system, but if one stops to look at the taskbar clock, every few seconds the system seems to stall and it’s obvious that the seconds are not evenly spaced at all.

    My question is: is the taskbar clock synchronized with DateTime.Now, or is the obvious stalling in the clock some UI update hiccup that DateTime is unaware of? Because if they are synchronized, then errors are not in the range of 10 ms at all, unless it evens out when you measure a long enough time period.

  2. Gabe says:

    The problem with the StopWatch class is that, while it is extremely precise, it is not guaranteed to be accurate. The source it uses for its tick count may be different on different CPUs, causing incorrect results when you stop the clock on a different CPU than you start it on. Furthermore, it may count at a different frequency in power-saving modes, which could be perfect for microbenchmarking code, but useless as an indicator of when an Ethernet packet arrived.

    I would also add that I rather like that DateTime has so much precision built in (even if it implies that DateTime.Now has more precision than it does), so I can use the same data structures and functions on data that represents birthdays, system times, and when Ethernet packets arrived. This is much preferable to other systems that require different representations and thus different libraries for each of those situations.

  3. I don’t know that I would go so far as to say DateTime supporting more precision than is provided by the hosting hardware platform is a bad thing.

    At no extra cost to developers DateTime can support more precise hosts in the future, in my book that’s a good thing.

    –Ifeanyi Echeruo

  4. Leo Bushkin says:

    Stopwatch is definitely one of those great little utility classes that many people are under-aware of. Being able to roll your own code performance timer using Stopwatch is invaluable in cases where you want to profile a very narrow area of code and you don’t have time to break out an actual code profiler.

    Something that often goes hand in hand with Stopwatch is the MethodBase.GetCurrentMethod() which reports the reflection info of the currently executing method. Unfortunately, you can’t centralize this into a utility helper method – since GetCurrentMethod() reports the method actually running. What would be nice would be a GetCallingMethod() method that looks at the call stack frame just above the current one. You can of course write your own stack-frame walking code … but who wants to do that 🙂

  5. SSLaks says:

    @Leo Bushkin:

    new StackFrame(1).GetMethod()
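
For what it’s worth, that one-liner can be wrapped into the helper Leo asked for. GetCallingMethod is a hypothetical name, and the NoInlining attribute is there because inlining could otherwise collapse the very stack frame being inspected:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;
using System.Runtime.CompilerServices;

static class CallerInfo
{
    // Frame 0 is this method itself; frame 1 is whoever called it.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static MethodBase GetCallingMethod()
    {
        return new StackFrame(1).GetMethod();
    }
}

class Demo
{
    static void Main()
    {
        // Typically prints "Main" (JIT optimizations can occasionally distort frames).
        Console.WriteLine(CallerInfo.GetCallingMethod().Name);
    }
}
```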

  6. Jon Skeet says:

    The irony is that we’ve taken this on board for Noda Time as well – as the common "smallest unit of time" in .NET is a tick, we felt we needed to support that in Noda too. Joda Time – which we’re porting from – only supports down to millisecond precision.

    On the other hand, I suppose it means we can use the same types for stopwatch measurements and other measurements. My biggest gripe about Stopwatch is that it uses ticks as well – but to mean something entirely different from ticks in DateTime/TimeSpan. Grrr.

    Fun fact: for time zones, we actually represent offsets in milliseconds. It’s possible that that’s overkill – seconds would probably have been okay. Minutes wouldn’t have been, however – there have been time zones with offsets from UTC of "9 minutes and 21 seconds" and similar.

    I don’t have too much problem with DateTime having too much accuracy, so long as everyone *knows* it.

  7. Jon Skeet says:

    Doh – amendment to final comment… I don’t have a problem with DateTime having too much *precision* so long as everyone knows it. Precision, not accuracy.

  8. Jeff Lorenzini says:

    Pass GetCurrentMethod() as a parameter to the logging function.

  9. Tom W says:

    Thanks Eric, that’s a pretty useful summary, straight from the horse’s mouth as it were.

    I do think this needs to be made far more explicit in the framework documentation. A colleague of mine (a behavioural psychologist interested in human reaction times) claims to have been trying to get a straight answer from all sorts of experts regarding the accuracy of Windows timing for around a decade now. I’ve been exploring the matter recently and I’m still picking up all sorts of contradictory statements. All I really want is a reliable piece of documentation that states unequivocally the factors influencing the margin of error in a Stopwatch elapsed time figure in a quantitative way, so that I can extrapolate a scientifically rigorous lower bound. Apparently nothing on the web is able to give me this.

  10. An excellent (as usual) post. One additional item to mention is to NOT use DateTime.Now for calculations. Two reasons. (in no particular order)

    1) it is MUCH higher overhead than DateTime.UtcNow

    2) It WILL give you errors in most US locations twice a year.

    I actually was involved with one company (Eastern US) who used "Bogota" time to avoid the time jumping as Daylight Savings kicked on/off. The side effect was that ALL computer clocks were off by 1 hour during the summer…..DELIBERATELY!!!
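
A sketch of the pattern this comment recommends; TimeSomething is a hypothetical helper, and the point is simply that both readings come from UtcNow, so a daylight saving transition between them cannot skew the subtraction:

```csharp
using System;

class ElapsedUtc
{
    public static TimeSpan TimeSomething(Action work)
    {
        // UtcNow: no conversion to local time, and no one-hour jump
        // if daylight saving time starts or ends mid-measurement.
        DateTime start = DateTime.UtcNow;
        work();
        return DateTime.UtcNow - start;
    }

    static void Main()
    {
        TimeSpan elapsed = TimeSomething(() => { /* work being measured */ });
        Console.WriteLine(elapsed >= TimeSpan.Zero); // True
    }
}
```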

  11. Lonli-Lokli says:

    About DateTime – a very old thing 🙂 Richter mentioned it in his book; as I remember, it’s because of the standard Windows Win32 timer, not because of .NET or DateTime.

  12. L. says:

    Something that is really missing is a way to get a precision counter (HPET) reading at the precise moment when the system datetime counter was last incremented (other than looping to check whether the datetime has changed). This would make it far easier to implement a good time synchronization scheme.

  13. pete.d says:

    "The problem with the StopWatch class is that, while it is extremely precise, it is not guaranteed to be accurate."

    Thankfully, this isn’t quite true. There are computer systems with faulty BIOS for which StopWatch can suffer the problems described. But that’s not a normal affair. On a correctly working system, StopWatch is fine within the documented limits of the class.

  14. francis d says:

    How about this? Let’s say I have to do something every 1 minute. Here are two possible ways that I can solve the problem:

     Solution A

     DO it
     WAIT 1 minute
     DO it
     WAIT 1 minute
     …

     Solution B (assume I start at 12:00)

     DO it
     WAIT until 12:01
     DO it
     WAIT until 12:02
     …

    But, if what has to be done takes a noticeable amount of time, let’s say 30 seconds, then the result from Solution A will be very different from Solution B.

     Time          Solution A      Solution B
     --------      ----------      ----------
     12:00:00      DO it           DO it
     12:00:30      WAIT 1 min      WAIT until 12:01
     12:01:00                      DO it
     12:01:30      DO it           WAIT until 12:02
     12:02:00      WAIT 1 min      DO it
     12:02:30                      WAIT until 12:03
     12:03:00      DO it           DO it
     12:03:30      WAIT 1 min      WAIT until 12:04
     12:04:00                      DO it
     …             …               …
     much later    gets worse      still on schedule

    I’ve met someone who had a similar problem, used Solution A (or something like it), but was expecting to have results similar to Solution B. When he told me his story, I thought to myself, hey, that’s kind of like the problem with dead reckoning.

    But how else can this problem of drift be minimized? I thought a clock would be a good point of reference to use to get back on course.
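
One way to sketch Solution B in C#; RunOnSchedule is a hypothetical helper, and the key line is that the next deadline is computed from the schedule, not from when the work happened to finish:

```csharp
using System;
using System.Threading;

class FixedSchedule
{
    public static void RunOnSchedule(TimeSpan period, Action doWork, int iterations)
    {
        DateTime next = DateTime.UtcNow;
        for (int i = 0; i < iterations; i++)
        {
            doWork();
            next += period;                       // advance the schedule, not "now + period"
            TimeSpan wait = next - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);               // drift from "DO it" does not accumulate
            // If wait <= zero, the work overran; start the next one immediately.
        }
    }

    static void Main()
    {
        RunOnSchedule(TimeSpan.FromMilliseconds(100),
                      () => Console.WriteLine(DateTime.UtcNow.ToString("HH:mm:ss.fff")),
                      5);
    }
}
```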

  15. Chris says:

    “what time does Doctor Who start?”

    That’s surprising. I thought Dr. Who was a curiously UK phenomenon, and over in the US you hadn’t even heard of it, let alone appreciate it.

    (And did you see the start of the new series? Fantastic!)

    I grew up watching Dr. Who on WNED (PBS Buffalo, New York). As a child the opening credit sequence alone terrified me, though I would occasionally watch it “from a position of safety behind the couch” as they say. I became a big fan as a teenager; I still have a complete set of the Marvel reprint of Dr. Who Comics somewhere in the house, which as a teenager represented a significant fraction of my monthly income. It is reasonably well known in the US, though when I wear my Tom Baker scarf, hardly anyone comments on it anymore; that series seems to no longer have much pop culture currency in the United States.

    I’ve seen the first two seasons of the reboot and I am mostly favourably impressed; they seem to have done a good job of staying true to the wit, humour, scariness and cheerful low-budget making-do of the original run. I’ve not had a chance to sit down and watch the later seasons; eventually I’ll pick them up on DVD or watch them on Netflix On Demand.  — Eric

  16. ShuggyCoUk says:

    "I don’t know that I would go so far as to say DateTime supporting more precision than is provided by the hosting hardware platform is a bad thing.

    At no extra cost to developers DateTime can support more precise hosts in the future, in my book that’s a good thing."

    Yeah, big ++ to this (and glad that datetime2 exists now in SQL Server)

    It is entirely possible to get better precision from other devices; having the precision in the standard struct is so much nicer, and it meant that our use of such devices didn’t involve a tedious replacement of DateTime with some other type everywhere… I viewed MS’s provision of DateTime with this level of precision in storage as very forward-thinking and extremely sensible; I’m surprised you don’t think it was a good idea.

    Note that if MS ever start using the HPET timer in modern systems for the system time (as many Linux distributions now do) they will almost instantly start hitting well beyond millisecond-level precision. Even when the accuracy is off, the offset is likely to remain within this level of precision over the course of a day, so it is still useful. In fact, given that there is truly no one true time in a relativistic sense, the ability to have decent local precision right now is really why having it baked into the struct is a good thing.

    The next issue will be people assuming this level of precision in DateTimes/TimeSpans means that they can request waits/pauses of OS-level constructs with that level of granularity. That they can already do this (milliseconds being the precision ‘exposed’ by Sleep(), but the scheduler only giving 10-15 ms of actual granularity) suggests that it is these constructs that need either better documentation or more ‘obviousness’ in the type of their arguments.

  17. Ragnaroknrol says:

    Chris:

    PBS (Public Broadcasting Station) has affiliates all over the US.  I have fond memories of donation drives where the bonus for making a $200 or more donation was a rather long scarf…  I almost stole my father’s checkbook.  

    It took off in the US in geek circles during the Tom Baker era, mostly. Among the geeks it was a huge phenomenon. They attempted to make a new, American Doctor in the late 90s in the US; it bombed thanks to being far too different from the British version. It had a pilot and got buried.

    Syfy Channel (formerly Sci Fi Channel) ran the latest series of the Dr. about a season behind.  BBC US will show it soon.  I am guessing SyFy will snatch it up and show it behind the times a bit too.  

    (I loved Torchwood as well, and saw it on BBC US)

    As for the actual comment, I know when Dr. Who is on.  Whenever I want it to be thanks to Tivo!  🙂

  18. Richard says:

    > what time does Doctor Who start?

    Given The Doctor can’t distinguish 5 minutes and 10 years, I think there is a whole different accuracy argument when it comes to Time Lords.

    (And the answer is 2010-04-10T18:15 BST.)

  19. Gabe says:

    ShuggyCoUk: I recall hearing that they intentionally don’t have a higher-resolution clock because it uses too much power. When a CPU has to wake up 1000 times per second to increment the clock ticks, it can’t spend much time in low-power sleep states. Keeping the clock at only 64 Hz enables lower power consumption and longer battery life. In fact, in Win 7 they went to great lengths to reduce timer interrupts (implementing timer coalescing).

    pete.d: You seem to have a different definition of "guarantee" than I do.

    Chris: I grew up watching Dr. Who in the early 80s in America.

  20. Stuart says:

    Frustratingly, BBC America has decided to air new Dr Who episodes approximately two weeks later in the US than in the UK.

    Thereby guaranteeing that half their audience will torrent it and they’ll lose a ton of ad revenue.

    For those of us who wait to watch it until it actually airs, that’s doubly frustrating, because most of our friends have already seen it…

  21. jsrfc58 says:

    I think most of these issues would go away when they finally start installing atomic clocks on the motherboard.

    http://en.wikipedia.org/wiki/Atomic_clock

  22. Konstantin says:

    "On my machine this says “same” eight or nine times, and then suddenly the Ticks property jumps by about 160000, which is 16 milliseconds, a 64th of a second"

    Weird. On my PC (an old Core 2 Duo) it jumps by the same ~160000, but it prints ~3600 "same"s in between when I run it from the command line, and ~500 when I run it from Visual Studio 2005, which is a 7x slowdown, which is decent. Eric is talking about a 60x slowdown compared to Visual Studio 2005, which is scary.

    Eric, do you use the latest Visual Studio? Is there something fundamentally wrong with output window performance? Or am I missing something?

  23. Aaron says:

    Unfortunately, Stopwatch does not exist in Silverlight 3 or 4, and the only way I see to implement it is using DateTime.Now. Is there another way to get a more precise elapsed time measurement on Silverlight?

    You’ll have to ask someone who is an expert on Silverlight. I wouldn’t know. — Eric

  24. M. McCulloch says:

    "It is rather unfortunate, in my opinion, that the DateTime structure does surface as much precision as it does, because it makes it seem like operations on that structure ought to be accurate to that level too."

    But, Eric, it’s still nice to have a convenient type for holding and manipulating high precision time values. The time values themselves may come from an external source.

  25. @M. McCulloch:

    "But, Eric, it’s still nice to have a convenient type for holding and manipulating high precision time values. The time values themselves may come from an external source."

    You take the words right out of my mouth, the DateTime struct is in no way bound to DateTime.Now (system time), DateTime.Now just happens to be one of the users of the struct.

  26. Steve says:

    @francis d

    You’re not considering the time it takes to "DO it"; thus you’re accumulating drift each time you perform a "DO it" operation.

  27. Mario says:

    Hi Eric!

    What you write is right, but I’d see the accuracy of DateTime from another viewpoint. I’ve built an app displaying historical data (generated by a SCADA system) that records thousands of samples marked by a timestamp. Well, obviously our hardware can’t guarantee tenths of a microsecond either, but I may have to record samples generated by a 10 MHz (and over) source, for example. This isn’t such a special case… In that way, probably not even the 0.1 µs resolution would be enough.

    Good article, anyway.

    Cheers

  28. Nick says:

    I actually receive time values with this level of precision from my instrumentation, and it is nice that I don’t need any special constructs to deal with them in .NET after parsing them into the appropriate DateTime or TimeSpan objects.

    So I can use the standard DateTime.Add/Subtract/etc to group sets of 1ms pulses and then if I do a properly formatted ToString I’ll get the ‘wall-clock-time’ for the first line in the report.

    I’ll tell ya, it’s not quite as fun trying to do those kinds of things in C++ or other frameworks

  29. Eric Newton says:

    francis: it's because your friend's process doesn't start waiting 1 minute from the beginning of the minute, but AFTER the process runs, which is indeterminate; i.e., the process may take 1 second one time (no work performed) and then 30 seconds the next time. Your 'drift' is due to the process run time, and using a method like "Sleep(1min)" will cause your process to drift indeterminately because of it.

    Process A

    12:00:00 run.

    12:01:01 first run complete [WAIT 1 MINUTE]

    12:02:01 run.

    12:02:31 second run complete [WAIT 1 MINUTE]

    12:03:31 run.

  30. Sly Gryphon says:

    Do not use DateTime; use DateTimeOffset instead.

    "DateTimeOffset should be considered the default date and time type for application development"

    msdn.microsoft.com/…/bb384267.aspx
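
A small illustration of why DateTimeOffset is less ambiguous than DateTime: it carries the UTC offset along with the value, so two renderings of the same instant compare equal. The times below are just the Doctor Who airtime mentioned earlier:

```csharp
using System;

class OffsetDemo
{
    static void Main()
    {
        // 18:15 BST (UTC+1) and 17:15 UTC are the same instant.
        DateTimeOffset bst = new DateTimeOffset(2010, 4, 10, 18, 15, 0, TimeSpan.FromHours(1));
        DateTimeOffset utc = new DateTimeOffset(2010, 4, 10, 17, 15, 0, TimeSpan.Zero);

        Console.WriteLine(bst == utc);                          // True: equality compares the instant
        Console.WriteLine(bst.UtcDateTime == utc.UtcDateTime);  // True
    }
}
```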