The procedure cache responds to memory pressure like all other cache stores built on the common caching framework. Eviction of cache entries under memory pressure is based on the cost of the entry (plan). When memory pressure is reached, in one clock round zero-cost entries are removed from the cache store and the cost of the remaining entries is reduced (by a factor of two). This clock-based sweep approximates a least recently used (LRU) policy.
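The clock round described above can be sketched as follows. This is an illustrative model only, not SQL Server's implementation; the function and variable names are ours.

```python
# Hypothetical sketch of the clock-sweep eviction described above:
# each round removes zero-cost entries and halves the cost of the rest.

def clock_round(entries):
    """One clock-hand pass over {plan_id: cost}; returns evicted ids."""
    evicted = [pid for pid, cost in entries.items() if cost == 0]
    for pid in evicted:
        del entries[pid]
    # Surviving entries have their cost divided by two (integer halving).
    for pid in entries:
        entries[pid] //= 2
    return evicted

cache = {"plan_a": 4, "plan_b": 1, "plan_c": 0}
print(clock_round(cache))  # ['plan_c'] -- zero cost, so it is evicted
print(cache)               # {'plan_a': 2, 'plan_b': 0}
```

An entry that is never reused thus survives only a few rounds before its cost decays to zero, while frequently reused plans (whose cost is reset on reuse) stay cached.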
There are two types of memory pressure in response to which entries are removed from cache stores: local and global memory pressure.
9.1 Local Memory Pressure
If a particular cache store grows too big, it is flagged as reaching its local memory pressure limit and entries are removed from that cache store only; entries are not removed from other cache stores. This prevents one cache store from consuming all of the system memory. The conditions for reaching local (internal) memory pressure are described below.
The following table shows how the maximum limit of the plan cache is determined for each version of SQL Server:
SQL Server Version: Cache Pressure Limit
SQL Server 2005 RTM & SP1: 75% of visible target memory from 0-8 GB + 50% of visible target memory from 8-64 GB + 25% of visible target memory above 64 GB
SQL Server 2005 SP2: 75% of visible target memory from 0-4 GB + 10% of visible target memory from 4-64 GB + 5% of visible target memory above 64 GB
SQL Server 2000: 4 GB upper cap on the plan cache
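The tiered formulas in the table above can be worked through numerically. The function below is an illustrative calculation only (the name and version labels are ours); the tier boundaries and percentages come from the table.

```python
# Illustrative calculation of the plan cache pressure limit, given the
# visible target memory in GB, using the tiered formulas in the table.

def cache_pressure_limit_gb(visible_gb, version="2005sp2"):
    if version == "2005sp2":
        tiers = [(4, 0.75), (64, 0.10), (float("inf"), 0.05)]
    else:  # SQL Server 2005 RTM / SP1
        tiers = [(8, 0.75), (64, 0.50), (float("inf"), 0.25)]
    limit, prev = 0.0, 0.0
    for upper, fraction in tiers:
        # Memory falling inside this tier, weighted by the tier's percentage.
        portion = max(0.0, min(visible_gb, upper) - prev)
        limit += portion * fraction
        prev = upper
    return limit

# 28 GB visible memory under SP2: 4*0.75 + 24*0.10 = 5.4 GB
print(round(cache_pressure_limit_gb(28), 2))         # 5.4
# The same machine under RTM/SP1: 8*0.75 + 20*0.50 = 16.0 GB
print(round(cache_pressure_limit_gb(28, "rtm"), 2))  # 16.0
```

Note how much the SP2 formula tightens the limit on large-memory machines, which was the motivation for the change.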
If a cache store reaches 75% of the cache store size limit (from the table above) in single-page allocations, or 50% of the limit in multi-page allocations, internal memory pressure is triggered and entries are removed from the cache store. On 32-bit systems, the cache store size limit is determined from the visible target memory (up to 3 GB). On 64-bit systems, it is determined from the max server memory setting.
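The two trigger thresholds can be expressed as a simple check. This is a sketch under the thresholds stated above; the function name and units are ours.

```python
# Sketch of the internal-pressure trigger: pressure is signaled when
# single-page allocations reach 75% of the cache store size limit, or
# multi-page allocations reach 50% of it.

def internal_pressure(limit_mb, single_page_mb, multi_page_mb):
    return single_page_mb >= 0.75 * limit_mb or multi_page_mb >= 0.50 * limit_mb

print(internal_pressure(1000, 700, 100))  # False: below both thresholds
print(internal_pressure(1000, 760, 100))  # True: single-page >= 750 MB
```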
If internal memory pressure is reached upon insertion of an entry into the cache store, entries are deleted from the cache store synchronously on the same thread. This can noticeably increase the response time of the query whose plan was being inserted.
When internal memory pressure is signaled, the default number of entries updated in one clock hand move (before re-checking whether internal memory pressure still persists) is 16. Up to that many entries may be removed if all of them are zero-cost. If internal memory pressure still persists after 16 entries have been removed from the cache store, the next notification removes 32 entries (double the number removed in the previous notification cycle, up to a maximum of 1,024 entries).
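The doubling behavior above can be sketched as a generator. Illustrative only; the starting batch of 16 and the 1,024 cap come from the text, the rest is ours.

```python
# Sketch of the batch-size doubling under sustained internal pressure:
# 16 entries per notification, doubling each cycle, capped at 1024.

def batch_sizes():
    n = 16
    while True:
        yield n
        n = min(n * 2, 1024)

gen = batch_sizes()
print([next(gen) for _ in range(8)])
# [16, 32, 64, 128, 256, 512, 1024, 1024]
```

The exponential ramp-up means a cache store under persistent pressure is drained quickly after only a few notification cycles.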
Another condition that signals internal memory pressure for the cache stores is if the number of entries in the cache store becomes greater than 4 times the hash table size. The hash table size can be obtained using the following query:
select buckets_count from sys.dm_os_memory_cache_hash_tables
where type in ('CACHESTORE_OBJCP', 'CACHESTORE_SQLCP', 'CACHESTORE_PHDR', 'CACHESTORE_XPROC')
On a 32-bit system, if the SQLCP cache store has more than approximately 40,000 entries (or more than approximately 160,000 entries on a 64-bit system), internal memory pressure is triggered on that cache store and entries are removed. Examine the sys.dm_os_memory_cache_clock_hands DMV to see whether the internal clock hand is moving for the SQLCP cache store.
select * from sys.dm_os_memory_cache_clock_hands where type = 'CACHESTORE_SQLCP'
9.2 Global Memory Pressure
When the system reaches external memory pressure, entries are removed from all cache stores based on their cost. Several conditions can signal external memory pressure, as described below:
Global External Memory Pressure: This condition occurs when there are several processes, including sqlservr.exe, running on the machine, and the operating system determines that sqlservr.exe needs to shrink its memory consumption. Since this is a global condition, all cache stores are shrunk in response to the signal from the operating system.
Global Internal Virtual Memory Pressure: This condition is more likely on 32-bit systems, where virtual address space is limited. If a large number of multi-page allocations are made, the server may run low on available virtual address space. This shortage is propagated to the resource monitor, which in turn signals all cache stores to shrink.
Global Internal Memory Pressure: There are two conditions under which the memory broker signals global internal memory pressure, based on single-page allocations only:
- If the predicted usage of memory by all cache stores (as predicted by the cache memory broker) reaches 80% of the visible memory, then global internal memory pressure is signaled. In response to this condition, entries are removed from all cache stores.
- The memory broker maintains an account of how much memory is allocated to the cache stores, the query optimizer, and query execution (reserved pages). It also tracks the rate of growth of each of these components and predicts future allocations over a 4-second interval. The overall target memory allocation must stay below 80% of the visible buffer pool at all times. If the predicted and current usage are below this overall target, all components, including caches, can continue to grow. New growth targets are computed for each component and broadcast. If a component's current memory usage is higher than its new target, a shrink notification is sent; if it is lower, the component can continue to grow cautiously toward the target. For example, if the cache stores are large compared with the memory allocated for query optimization or query execution, and the projected rate of growth of the query optimizer is high while that of the cache stores is low, the cache stores will be shrunk to make memory available to the query optimizer.
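The broker's decision described in the two bullets above can be sketched as follows. This is a hypothetical model: the 80% overall target comes from the text, but the proportional split of that target across components, and all names, are our illustration, not SQL Server internals.

```python
# Hypothetical sketch of the memory-broker decision: each component's
# current usage is compared against a new target derived from its
# predicted demand, with the overall target held at 80% of visible memory.

def broker_notifications(visible_mb, components):
    """components: {name: (current_mb, predicted_growth_mb)} -> {name: action}"""
    overall_target = 0.80 * visible_mb
    predicted = {n: cur + growth for n, (cur, growth) in components.items()}
    total_predicted = sum(predicted.values())
    actions = {}
    for name, (current, _) in components.items():
        # Share of the overall target, proportional to predicted demand
        # (an assumed policy, for illustration only).
        new_target = overall_target * predicted[name] / total_predicted
        actions[name] = "shrink" if current > new_target else "grow"
    return actions

# Caches are large but growing slowly; the optimizer is growing fast,
# so the caches are asked to shrink to make room.
print(broker_notifications(10000, {
    "cache_stores": (6000, 100),
    "optimizer":    (1000, 2000),
    "execution":    (500, 200),
}))
```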
When external memory pressure is signaled, the default number of entries updated in one notification cycle is 16 for each cache store. Updated entries that are zero-cost are removed.
If a cache store is empty, the up to 16 updates it would otherwise have received are added to a global pool of unused entries. When the next non-empty cache store is encountered, up to 16 entries plus the number of unused entries available in the global pool can be updated there. In other words, the gain from not having to delete entries from an empty cache store is applied to the next cache store that has entries to remove. For example, if one empty cache store is followed by a large non-empty cache store, the global pool holds 16 unused entries, so up to 32 entries (16 + 16 from the pool) can be updated in the non-empty store. This prunes large cache stores faster, especially when there are several small cache stores and one large one.
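The carry-over behavior above can be sketched as a single pass over the cache stores. This is an illustrative model: the base budget of 16 is from the text, and as a simplifying assumption we also carry over the unused remainder from stores that have fewer than 16 entries, which the text only describes for empty stores.

```python
# Sketch of the unused-entry pool: each store gets a base budget of 16
# entry updates per sweep; budget unused on an empty (or small) store
# carries over to the next non-empty store.

BASE = 16

def sweep_budgets(store_sizes):
    """store_sizes: entries per cache store, in sweep order.
    Returns how many entries may be updated in each store."""
    pool, budgets = 0, []
    for size in store_sizes:
        allowed = min(size, BASE + pool)
        budgets.append(allowed)
        # Whatever was not used goes back into the global pool.
        pool = (BASE + pool) - allowed
    return budgets

# One empty store followed by a large store: the large store may have
# up to 32 entries updated (16 base + 16 unused from the empty store).
print(sweep_budgets([0, 1000]))        # [0, 32]
print(sweep_budgets([0, 0, 5, 1000]))  # [0, 0, 5, 59]
```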
Note that under both internal and external memory pressure there is no mechanism to remove entries that are currently in use or to free all of their memory. However, such entries may still hold memory that can be returned to the system under memory pressure. The two compiled plan cache stores (SQLCP and OBJCP) contain compiled plan entries that have dependent objects such as MXCs, XStmts, and cursors. These two cache stores implement a mechanism to reclaim half the memory from these dependent objects even when the entries are in use. The dependent objects are regenerable cache entries and are fairly inexpensive to regenerate compared with compiled plans, so keeping them in cache is less valuable, especially under memory pressure.