3.0 Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2

 

3.1 Costing algorithm change between SQL Server 2000 and SQL Server 2005 RTM

 

In SQL Server 2000 and 2005, the cost of a query depends on three factors: the number of disk IO requests, the number of context switches during query compilation (in 4 ms quanta), and the number of memory pages from both single-page and multi-page allocations. In SQL Server 2005, the original cost of the query is a power of two, where the exponent is the sum of the contributions of disk IOs, context switches, and memory pages.

 

Mathematically

Cost = 2 ^ ( min(IO > 0 ? (IO - 1) / 2 + 1 : 0, 19) + min(CS > 1 ? (CS - 1) / 2 + 1 : 0, 8) + min(MP / 16, 4) )

 

Where

IO - the number of disk IO requests (regardless of the number of bytes involved),

CS - the number of context switches, i.e. the number of 4 ms quanta,

MP - the number of memory pages (from both single-page and multi-page allocations)

 

The contributions are as follows. For context switches: if a query has zero or one context switches, the contribution is zero; with two context switches, the contribution is one; and for every two context switches over two, the contribution increases by one, up to a maximum of eight. For disk IOs: the contribution is zero if the number of disk IOs is zero; for every two disk IOs over zero, the contribution increases by one, up to a maximum of nineteen. For memory pages: the contribution is the number of pages divided by sixteen, up to a maximum of four. To illustrate with an example: if a query has zero disk IOs, four context switches, and two memory pages, the original cost is 2 ^ (0 + 2 + 0) = 4.
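To make the arithmetic concrete, the following Python sketch (illustrative only, not SQL Server source code; truncating integer division is assumed for the IO and context-switch terms, as the worked example implies) computes the original cost from the IO, CS and MP values defined above.

def original_cost(io: int, cs: int, mp: int) -> int:
    # io = disk IO requests, cs = context switches (4 ms quanta),
    # mp = memory pages from single- and multi-page allocations
    io_part = min((io - 1) // 2 + 1 if io > 0 else 0, 19)
    cs_part = min((cs - 1) // 2 + 1 if cs > 1 else 0, 8)
    mp_part = min(mp // 16, 4)
    return 2 ** (io_part + cs_part + mp_part)

# The worked example from the text: zero disk IOs, four context switches,
# two memory pages -> 2 ^ (0 + 2 + 0) = 4
print(original_cost(io=0, cs=4, mp=2))  # prints 4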

 

 

In SQL Server 2005, the computed cost is directly proportional to the number of memory pages used by the plan, whereas in SQL Server 2000 the computed cost is inversely proportional to the number of memory pages. As a result, queries that consume a large number of memory pages are not cached in SQL Server 2000, but will be cached in SQL Server 2005. On an upgrade, for a workload with large queries and limited reuse, this can result in a dramatic increase in plan cache size.

 

3.2 Improvements made to Plan Cache behavior in SQL Server 2005 SP2

 

  1. Reduced contention in the creation and eviction of plan cache entries: In SQL Server 2005 RTM and SP1, there is a single allocator for all descriptors of compiled plans of dynamic queries. The descriptor is the part of the compiled plan that must be created to add the compiled plan to the plan cache and destroyed to evict it from the plan cache. Contention on this single allocator can cause as much as a 40% drop in application throughput or, in more severe cases, an out-of-memory error. In SP2, we partition the allocator per CPU to reduce this contention, thus increasing throughput and eliminating the out-of-memory error.
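The following Python sketch is purely a hypothetical illustration of the technique (one lock-protected partition per CPU instead of a single shared allocator), not SQL Server code; the class and method names are invented for the example.

import threading

class PartitionedDescriptorAllocator:
    # One partition per CPU, each with its own lock, so threads running on
    # different CPUs do not serialize on a single allocator lock when they
    # create or destroy plan cache descriptors.
    def __init__(self, num_cpus: int):
        self._partitions = [
            {"lock": threading.Lock(), "descriptors": set()}
            for _ in range(num_cpus)
        ]

    def create_descriptor(self, cpu_id: int, plan_id: str) -> None:
        part = self._partitions[cpu_id % len(self._partitions)]
        with part["lock"]:  # contention is limited to one CPU's partition
            part["descriptors"].add(plan_id)

    def destroy_descriptor(self, cpu_id: int, plan_id: str) -> None:
        part = self._partitions[cpu_id % len(self._partitions)]
        with part["lock"]:
            part["descriptors"].discard(plan_id)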

 

  2. Implemented a fairness policy in evicting cache entries across all NUMA nodes: On servers with a non-uniform memory access (NUMA) configuration (hardware NUMA or soft-NUMA), there are multiple NUMA nodes, and different caches are placed on different NUMA nodes. SQL Server responds to memory pressure by evicting cache entries. In SQL Server 2005 RTM and SP1, this can cause small caches on one NUMA node (such as the metadata cache) to go through multiple rounds of eviction while large caches (the plan cache, for instance) on a different NUMA node have yet to finish one round. As a result, entries in the small metadata cache have to be constantly re-created, causing contention on the schema modification lock, and this contention leads to a significant drop in throughput. In SP2, we implemented a fairness policy for evicting cache entries across all NUMA nodes, thus avoiding the drop in throughput.
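The fairness idea can likewise be pictured with a small, purely hypothetical Python sketch (not SQL Server's implementation): eviction proceeds round-robin across the caches on every NUMA node, so a small cache on one node is not forced through many eviction rounds while a large cache on another node is still in its first.

def evict_fairly(nodes, pages_to_free):
    # nodes maps a NUMA node id to a list of cache entries, each with a
    # "size" in pages; one entry is evicted per node per round.
    freed = 0
    node_ids = list(nodes)
    while freed < pages_to_free and any(nodes[n] for n in node_ids):
        for node in node_ids:
            if nodes[node] and freed < pages_to_free:
                freed += nodes[node].pop(0)["size"]
    return freed

# Example: a small cache on node 0 and a large cache on node 1 give up
# memory at a similar pace instead of node 0 being drained repeatedly.
caches = {0: [{"size": 2} for _ in range(3)], 1: [{"size": 8} for _ in range(3)]}
print(evict_fairly(caches, pages_to_free=12))  # prints 12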

 

  3. Aligned the SQL Server 2005 plan cache size limit to a size similar to that of SQL Server 2000: The maximum size limit of the plan cache in SQL Server 2005 RTM and SP1 is significantly larger than in SQL Server 2000. To bring the SQL Server 2005 plan cache maximum size limit closer to that of SQL Server 2000, SP2 changes the limit at which internal memory pressure is signaled on a cachestore. The following shows how the maximum limit of the plan cache is determined for each version of SQL Server:

 

Internal memory pressure indication in the cachestore, by SQL Server version:

SQL Server 2005 RTM & SP1: 75% of server memory from 0-8 GB + 50% of server memory from 8-64 GB + 25% of server memory > 64 GB

SQL Server 2005 SP2: 75% of server memory from 0-4 GB + 10% of server memory from 4-64 GB + 5% of server memory > 64 GB

SQL Server 2000: 4 GB upper cap on the plan cache

 

Example:

For a SQL Server instance with 32 GB of total SQL Server memory, the SQL Server 2005 RTM and SP1 cachestore limit will be 75% x 8 + 50% x (32 - 8) = 18 GB.

The SQL Server 2005 SP2 cachestore limit will be 75% x 4 + 10% x (32 - 4) = 5.8 GB.
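As a sanity check on this arithmetic, here is a small Python sketch of both tiered limits (illustrative only; the tier boundaries and percentages come from the figures above, and the function names are invented for the example):

def rtm_sp1_cachestore_limit_gb(server_memory_gb: float) -> float:
    # 75% of the first 8 GB + 50% of memory from 8-64 GB + 25% of memory above 64 GB
    return (0.75 * min(server_memory_gb, 8)
            + 0.50 * max(min(server_memory_gb, 64) - 8, 0)
            + 0.25 * max(server_memory_gb - 64, 0))

def sp2_cachestore_limit_gb(server_memory_gb: float) -> float:
    # 75% of the first 4 GB + 10% of memory from 4-64 GB + 5% of memory above 64 GB
    return (0.75 * min(server_memory_gb, 4)
            + 0.10 * max(min(server_memory_gb, 64) - 4, 0)
            + 0.05 * max(server_memory_gb - 64, 0))

# Reproduces the worked example for a server with 32 GB of SQL Server memory:
print(rtm_sp1_cachestore_limit_gb(32))        # 18.0
print(round(sp2_cachestore_limit_gb(32), 2))  # 5.8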

 

Limiting the maximum size of the plan cache ensures that enough room remains for database pages, and therefore improves throughput.

 

Note: Another internal memory pressure indication for the plan cache cachestore is reaching 40K or more entries in the cachestore on a 32-bit machine, or 160K or more entries on a 64-bit machine.

           

  4. Decreased the number of plan cache entries by not caching some zero-cost plans: In SQL Server 2005 RTM, SP1, and SP2, the cost of a compiled plan is directly proportional to the size of the compiled plan and to the IO and CPU cost of compiling it. A compiled plan is considered zero cost if its cost is below a certain threshold. In SQL Server 2005 RTM and SP1, zero-cost compiled plans for dynamic batches containing at least one SET (option) statement and/or at least one transaction statement (BEGIN/COMMIT/SAVE/ROLLBACK TRANSACTION) are cached. In SP2, these plans are no longer cached. The only exceptions are:

 

a. For the SET (option) statement: if all the statements in the batch are SET (option) statements, SP2 will still cache the batch even if it is zero cost.

b. For transaction statements (BEGIN/COMMIT/SAVE/ROLLBACK TRANSACTION): if all the statements in the batch are IF statements and/or transaction statements, SP2 will still cache the batch even if it is zero cost.

 

This change in SP2 reduces the caching of zero-cost plans for dynamic batches that are almost never reused because of the randomness of their parameters, while still allowing reuse of batches that are likely to be the same.
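The SP2 decision described above can be pictured with the following Python sketch. It is a hypothetical illustration only: the statement classification is a simplified stand-in, and the function is not SQL Server's parser or costing code.

SET_STMT, TXN_STMT, IF_STMT, OTHER_STMT = "set", "transaction", "if", "other"

def should_cache_zero_cost_batch(statement_kinds) -> bool:
    kinds = set(statement_kinds)
    if kinds and kinds <= {SET_STMT}:
        return True   # exception (a): batch is entirely SET (option) statements
    if kinds and kinds <= {IF_STMT, TXN_STMT}:
        return True   # exception (b): batch is only IF and/or transaction statements
    if SET_STMT in kinds or TXN_STMT in kinds:
        return False  # SP2 no longer caches these zero-cost plans
    return True       # batches without SET/transaction statements are unaffected
                      # by this change (assumed cached here for illustration)

print(should_cache_zero_cost_batch([SET_STMT, SET_STMT]))           # True  (exception a)
print(should_cache_zero_cost_batch([IF_STMT, TXN_STMT, TXN_STMT]))  # True  (exception b)
print(should_cache_zero_cost_batch([SET_STMT, OTHER_STMT]))         # False (not cached in SP2)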