How It Works: SQL Server No Longer Uses RDTSC For Timings in SQL 2008 and SQL 2005 Service Pack 3 (SP3)

Many of you have encountered the RDTSC timing variances that I outlined in earlier blog posts:   https://blogs.msdn.com/psssql/archive/2006/11/27/sql-server-2005-sp2-will-introduce-new-messages-to-the-error-log-related-to-timing-activities.aspx and https://blogs.msdn.com/psssql/archive/2007/08/19/sql-server-2005-rdtsc-truths-and-myths-discussed.aspx 

Several years ago the SQL Server 2008 builds removed the pure RDTSC, microsecond timings and replaced them with an MMTIME-like timing mechanism.   The timer (see timeGetTime for similar documentation) eliminates the variances that RDTSC timings can experience.  The lazy writer no longer reports false I/O stall warnings, and trace and statistics time values, along with the other timing values SQL Server produces, are reliable.
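
Here is a quick C sketch I put together to show the two timer styles side by side (this is an illustration only, not SQL Server source code). It assumes the MSVC __rdtsc intrinsic and linking with winmm.lib for timeGetTime. RDTSC returns raw CPU cycles that still need a frequency assumption to become time; the interrupt-based tick already returns stable wall-clock milliseconds.

#include <stdio.h>
#include <windows.h>
#include <intrin.h>                      /* __rdtsc intrinsic (MSVC) */
#pragma comment(lib, "winmm.lib")        /* timeGetTime */

int main(void)
{
    unsigned __int64 tsc_start  = __rdtsc();      /* raw CPU cycle count */
    DWORD            tick_start = timeGetTime();  /* interrupt-based milliseconds */

    Sleep(25);                                    /* stand-in for the work being timed */

    unsigned __int64 tsc_cycles = __rdtsc() - tsc_start;
    DWORD            elapsed_ms = timeGetTime() - tick_start;

    /* RDTSC yields cycles; converting cycles to time needs a CPU frequency
       assumption, and frequency stepping or multi-CPU drift can skew it.
       The interrupt tick is already in stable wall-clock milliseconds.   */
    printf("RDTSC cycles : %llu (still needs a frequency assumption)\n", tsc_cycles);
    printf("Interrupt ms : %lu\n", elapsed_ms);
    return 0;
}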

SQL Server 2005 Service Pack 3 (9.00.4035 - KB 931279) now contains the SQL Server 2008 MMTimer-like implementation, so you no longer encounter inaccurate timings on SQL Server 2005 SP3 or newer releases.

Why Do I Still See Microseconds?

The SQL Server Books Online columns for DMVs (example view: sys.dm_exec_query_stats, column: total_worker_time) are still documented as outputting microseconds.   For SQL Server 2005 RTM many areas of the server were updated to output microsecond timings.  Changing these areas back to millisecond outputs would break applications and would limit the granularity SQL Server can provide.  
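
As a rough illustration (again my own sketch, not SQL Server internals), the following shows why a value can keep its documented microsecond unit even though the underlying timer only ticks about once per millisecond: the millisecond reading is simply scaled up, so the reported numbers land on multiples of roughly 1,000.

#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    DWORD start_ms = timeGetTime();          /* ~1ms interrupt-based tick */
    Sleep(7);                                /* stand-in for the timed work */
    DWORD elapsed_ms = timeGetTime() - start_ms;

    /* Keep the documented unit (microseconds) even though the real
       granularity is ~1ms: the reported value is a multiple of ~1,000. */
    unsigned __int64 elapsed_us = (unsigned __int64)elapsed_ms * 1000;
    printf("reported: %llu microseconds (granularity ~1 ms)\n", elapsed_us);
    return 0;
}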

How It Really Works

NOTE: It is too difficult to explain the entire computer clock implementation in a simple blog. I have picked some references to help explain the design, but individual computer designs can vary, and I have made several generalizations in explaining the behavior.

Windows keeps track of the computer-provided clock values in a shared memory location.  APIs such as GetTickCount and timeGetTime (as well as the 64-bit tick count) return information from this location; timeGetTime in particular is based on the interrupt ticks stored in shared memory.  SQL Server reads the 64-bit, shared-memory interrupt counter to obtain interrupt-level granularity, just like timeGetTime.  This is consistently accurate, unlike RDTSC, and avoids QueryPerformanceCounter (QPC), which requires more CPU cycles per call.
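
For comparison, the sketch below (my own example, not SQL Server code) reads QueryPerformanceCounter and converts the result to milliseconds using QueryPerformanceFrequency. QPC offers finer resolution, but each call costs more CPU than simply reading the interrupt-based tick that timeGetTime exposes.

#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    LARGE_INTEGER freq, qpc_start, qpc_end;
    QueryPerformanceFrequency(&freq);        /* QPC counts per second */

    DWORD tick_start = timeGetTime();        /* cheap read of the interrupt tick */
    QueryPerformanceCounter(&qpc_start);     /* finer resolution, more CPU per call */

    Sleep(10);                               /* stand-in for the timed work */

    QueryPerformanceCounter(&qpc_end);
    DWORD  tick_ms = timeGetTime() - tick_start;
    double qpc_ms  = (double)(qpc_end.QuadPart - qpc_start.QuadPart) * 1000.0
                     / (double)freq.QuadPart;

    printf("interrupt tick : %lu ms\n", tick_ms);
    printf("QPC            : %.3f ms\n", qpc_ms);
    return 0;
}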

To accomplish the ~1ms timing, the timer resolution is changed to 1ms system wide (as with the timeBeginPeriod API), allowing SQL Server to obtain a better timer.
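
The resolution change works along these lines. This sketch (my example, not SQL Server code) raises the system timer resolution to 1ms with timeBeginPeriod and pairs it with the matching timeEndPeriod call the API requires; remember the change is system wide while it is in effect.

#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")        /* timeBeginPeriod / timeEndPeriod / timeGetTime */

int main(void)
{
    /* Request 1ms timer resolution for the whole system. */
    if (timeBeginPeriod(1) != TIMERR_NOERROR)
    {
        printf("1ms timer resolution is not supported on this machine\n");
        return 1;
    }

    /* While the 1ms period is active, Sleep and the interrupt tick behave
       with ~1ms granularity instead of the coarser default.              */
    DWORD start = timeGetTime();
    Sleep(1);
    printf("Sleep(1) took ~%lu ms at 1ms resolution\n", timeGetTime() - start);

    timeEndPeriod(1);                    /* always match timeBeginPeriod */
    return 0;
}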

Windows stores the interrupt time on the shared page in 100-nanosecond units, so each millisecond of wall-clock time corresponds to 10,000 increments.

0.000000001 [ billionth ] nanosecond [ ns ]
0.000001 [ millionth ] microsecond [ µs ]
0.001 [ thousandth ] millisecond [ ms ]
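
A short C sketch of the unit arithmetic (my own example): FILETIME values returned by GetSystemTimeAsFileTime are expressed in the same 100-nanosecond units, so dividing by 10 gives microseconds and dividing by 10,000 gives milliseconds.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimeAsFileTime(&ft);        /* wall clock expressed in 100ns units */
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    /* 1 unit = 100ns, so /10 yields microseconds and /10,000 yields milliseconds. */
    printf("100ns units : %llu\n", t.QuadPart);
    printf("microseconds: %llu\n", t.QuadPart / 10);
    printf("milliseconds: %llu\n", t.QuadPart / 10000);
    return 0;
}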

I have borrowed a snippet to help explain. Reference: https://www.arl.wustl.edu/~lockwood/class/cs306/books/artofasm/Chapter_17/CH17-3.html

17.4.2 The Timer Interrupt (INT 8)

The PC's motherboard contains an 8254 compatible timer chip. This chip contains three timer channels, one of which generates interrupts every 55 msec (approximately). This is about once every 1/18.2 seconds.

By default the timer interrupt is always enabled on the interrupt controller chip. Indeed, disabling this interrupt may cause your system to crash or otherwise malfunction. At the very least, your system will not maintain the correct time if you disable the timer interrupt.

17.4.7 The Real-Time Clock Interrupt (INT 70h)

PC/AT and later machines included a CMOS real-time clock. This device is capable of generating timer interrupts in multiples of .976 msec (let's call it 1 msec).

The real-time clock (RTC) interrupt (INT 70h) frequency is determined by the hardware. Here are a few statements I found on various web sites.

  • 1024 times per second, which is one interrupt every .976 milliseconds. 
  • Any of the motherboard documents from Intel provide the specific Real-Time Clock (RTC) type, such as the DS1287 or MC146818.
  • The interrupt frequency is thus always a power of 2 between 2 Hz and 32768 Hz.
  • Virtual implementations are available under Hyper-V, VMware and others

SQL Server now uses the interrupt timer, gaining stability while maintaining a high level of granularity at approximately 1 millisecond.  You can take the microsecond values SQL Server outputs and divide them by 1,000 to obtain the millisecond values; for example, a total_worker_time of 15,000 microseconds is 15 milliseconds.

Timer Usage Recap

  • SQL Server 2000: Uses GetTickCount with ~12ms granularity
  • SQL Server 2005: Uses RDTSC with microsecond granularity
  • SQL Server 2005 SP3: Uses interrupt timer with ~1ms granularity
  • SQL Server 2008: Uses interrupt timer with ~1ms granularity

References: timeGetTime, timeGetSystemTime, GetTickCount, Interrupt 70h, Interrupt 8

Bob Dorr
SQL Server Principal Escalation Engineer