Yukon caching vs SQL2000
What makes Yukon's memory management different from SQL2000's is its elaborate caching framework. In SQL2000 there are two major caches: the data page cache, called the Buffer Pool, and the procedure cache, a cache of query plans. The Buffer Pool and the procedure cache are very tightly coupled. For example, the procedure cache relies on the Buffer Pool's eviction mechanism to control its size, even though the two have separate costing policies. Having the Buffer Pool and the procedure cache tied together significantly simplified the caching machinery in SQL2000, at a time when we had plenty of hard-to-solve problems to deal with.
This model worked perfectly in SQL2000, but it didn't work for Yukon. With the addition of new features and new requirements in Yukon, the number of caches we had to support exploded. Tying all caches to the Buffer Pool became not just problematic but simply infeasible. It became obvious that we needed to create a common caching framework. The key ideas behind the framework are a uniform mechanism and common costing policies.
Common caching framework
To cache different types of data, SQLOS implements a common caching framework. There are several types of cache mechanisms: Cache Store, User Store, and Object Store. Each store has its own properties and hence its own usage. User Store is a bit of an awkward name for a cache, but once I describe its usage it will make more sense to you.
Before we jump into further description of the stores, I would like to explain the difference between a cache and a pool. In SQLOS's world, a cache is a mechanism for caching heterogeneous types of data, with a given cost for each entry. Usually there is some state associated with an entry. A cache implements lifetime control of an entry and its visibility, and provides some type of LRU policy. Depending on the type of data cached, each entry can be used by multiple clients at the same time. For example, SQL Server's procedure cache is a cache in SQLOS terms. A plan's lifetime, visibility, and cost are controlled by SQLOS's cache mechanism, and each plan can be used by multiple batches at the same time.
A pool, in turn, is in SQLOS terms a mechanism for caching homogeneous data. In most cases the cached data has neither state nor cost associated with it. A pool has only limited control over the lifetime of an entry and its visibility. Once an entry is taken from the pool it is actually removed from it, and the pool no longer has any control over the entry until it is pushed back. Only one client can use an entry at a time. An example of a pool is a pool of network buffers: no state, no cost, all buffers of the same size. Keep in mind that SQL Server's Buffer Pool is a cache in SQLOS terms; currently it doesn't use any of SQLOS's caching mechanisms.
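To make the cache/pool distinction concrete, here is a minimal sketch of the two shapes, assuming hypothetical class and method names of my own (SQLOS is, of course, implemented nothing like this):

```python
class CacheEntry:
    """A cache entry carries a cost and some state, and can be shared by clients."""
    def __init__(self, key, data, cost):
        self.key, self.data, self.cost = key, data, cost
        self.ref_count = 0            # how many clients currently hold the entry

class Cache:
    """Heterogeneous data; the cache controls lifetime and visibility."""
    def __init__(self):
        self._entries = {}
    def insert(self, key, data, cost):
        self._entries[key] = CacheEntry(key, data, cost)
    def lookup(self, key):
        entry = self._entries.get(key)
        if entry is not None:
            entry.ref_count += 1      # several clients may use the entry at once
        return entry

class Pool:
    """Homogeneous data, no state, no cost; an entry leaves the pool when popped."""
    def __init__(self, make_buffer):
        self._free = []
        self._make = make_buffer
    def pop(self):
        # once handed out, the pool has no control over the entry
        return self._free.pop() if self._free else self._make()
    def push(self, buf):
        self._free.append(buf)
```

Note how the cache hands the same entry to multiple clients, while the pool gives exclusive ownership until the buffer comes back.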
Cache Store and User Store
Both stores are actual caches, and they are very similar. The former implements support for hash tables, while the latter requires users of the cache (developers leveraging the framework) to implement their own storage semantics; hence the name User Store.
 | Cache Store |        | User Store |
         \                   /
          \                 /
       |  Clock Algorithm   |
       |  Clock Hands       |
       |  Clock Entry Info  |
Conceptually there are two major controls in SQLOS caches: lifetime control and visibility control. Lifetime control provides lifetime management of an entry; visibility control manages the visibility of an entry. It is important to understand that an entry in a cache can exist but not be visible. For example, if a cache is marked for single use only, an entry won't be visible after it is given away. In addition, an entry might be marked as dirty: it will continue to exist, to live, in the cache, but it won't be visible to any lookup.
The lifetime of entries is controlled by the stores' mechanisms themselves. In the case of a Cache Store, the lifetime is fully controlled by SQLOS's caching framework. In the case of a User Store, entries' lifetime is only partially controlled by the store: since the user implements its own storage, the user's mechanisms also participate in lifetime control. For both stores, entries' visibility is controlled by the caching framework.
The lifetime of an entry is managed by a reference count embedded in the Clock Entry Info. Once this count goes to zero, the entry is destroyed. In the case of a User Store, only the Clock Entry Info, not the actual data, is destroyed.
The visibility of an entry is implemented by a pin count embedded in the Clock Entry Info. Keep in mind that the pin count and the reference count are different mechanisms: one manages lifetime, the other manages visibility. For an entry to be visible, its pin count needs to have a value larger than 0; the entry also needs to be non-dirty and not marked for single use. A pin count of one means that the entry is visible and currently not in use.
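The interaction between the two counts can be sketched as follows (field and method names are my own illustration, not SQLOS's actual layout):

```python
class ClockEntryInfo:
    """Illustrative model of lifetime (ref count) vs. visibility (pin count)."""
    def __init__(self, data, cost):
        self.data, self.cost = data, cost
        self.ref_count = 1        # lifetime: the entry is destroyed when this hits zero
        self.pin_count = 1        # visibility: 1 = visible and currently not in use
        self.dirty = False

    def is_visible(self):
        # a dirty entry keeps living in the cache, but no lookup can see it
        return self.pin_count > 0 and not self.dirty

    def release(self):
        self.ref_count -= 1
        return self.ref_count == 0   # True tells the caller to destroy the entry
```

In this model, marking an entry dirty hides it from lookups without ending its life; only the reference count reaching zero actually destroys it.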
The cache store's mechanism implements its own storage: hash tables. A single cache store can have several hash tables. This is very useful when cache store users need support for different types of lookup. For example, SQL Server's procedure cache leverages this functionality to enable lookups of plans by name or by id. Lookups in different hash tables are independent of each other and don't conflict with the clock hand movement described below.
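A tiny sketch of the idea, with hypothetical names: one store, two hash tables, both indexing the same entry objects so either key finds the same plan.

```python
class CacheStore:
    """Illustrative cache store with two hash tables over the same entries."""
    def __init__(self):
        self.by_name = {}
        self.by_id = {}

    def insert(self, name, plan_id, plan):
        entry = {"name": name, "id": plan_id, "plan": plan}
        self.by_name[name] = entry    # the same entry object is reachable
        self.by_id[plan_id] = entry   # from both hash tables

    def lookup_by_name(self, name):
        return self.by_name.get(name)

    def lookup_by_id(self, plan_id):
        return self.by_id.get(plan_id)
```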
SQLOS's caching framework implements an LRU policy to control the visibility and lifetime of entries in a cache. It approximates LRU by implementing a clock algorithm. The clock algorithm object is embedded in both Cache and User Stores. Currently it contains two clock hands: internal and external. The external clock hand is moved by the Resource Monitor when the whole process comes under memory pressure. (http://blogs.msdn.com/slavao/archive/2005/02/19/376714.aspx)
The internal clock hand is used to control the size of a cache relative to other caches. You can think of the internal clock hand as a way to put a max cap on a single cache. If this mechanism didn't exist, it would be possible for a single cache to push the whole process into memory pressure. For example, if you execute numerous ad hoc queries, their plans can be cached; without the internal clock hand they would force the whole SQL Server process into memory pressure. To avoid such a situation, the internal clock hand starts moving once the framework predicts that the procedure cache's max cap is about to be reached.
A clock hand's movement doesn't interfere with store usage. Every time a clock hand steps on a not-in-use entry, it divides the entry's cost by 2. If an entry is not in use and its cost is zero, the clock hand first makes the entry invisible and then attempts to remove it from the cache. The removal can fail if the other clock hand is currently working with the same entry. Once both clock hands have moved away from the entry, it is removed.
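One sweep of a hand over the store can be sketched like this (a simplified model under my own naming, ignoring the visibility step and the two-hand interaction):

```python
def clock_sweep(entries):
    """One pass of a clock hand: halve the cost of every not-in-use entry,
    and evict entries whose cost has decayed to zero."""
    survivors = []
    for entry in entries:
        if entry["in_use"]:
            survivors.append(entry)   # the hand never evicts an in-use entry
            continue
        entry["cost"] //= 2
        if entry["cost"] > 0:
            survivors.append(entry)
        # cost == 0: the entry is made invisible and removed from the cache
    return survivors
```

Repeated sweeps thus age out entries that are neither used nor expensive enough to keep, which is exactly the LRU-like behavior the clock algorithm approximates.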
It is possible that in the future we will add more clock hands so that we can get better control over a single cache or a group of caches.
The Object Store is a simple pool. It is used to cache homogeneous types of data. Currently it doesn't have any costing associated with its entries. It implements a max cap to control its size relative to other caches. In addition, on every memory pressure notification from the Resource Monitor, the object store removes a preconfigured number of entries. It is possible to implement more sophisticated algorithms in the future; for now we would like to keep things as simple as possible.
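The whole mechanism fits in a few lines; here is a hypothetical sketch (names and the choice of which entries to trim are my own):

```python
class ObjectStore:
    """Illustrative object store: a pool with a max cap and a fixed trim on pressure."""
    def __init__(self, factory, max_cap, trim_on_pressure):
        self._free = []
        self._factory = factory
        self._max_cap = max_cap            # cap on the number of pooled entries
        self._trim = trim_on_pressure      # entries dropped per pressure notification

    def pop(self):
        return self._free.pop() if self._free else self._factory()

    def push(self, obj):
        if len(self._free) < self._max_cap:
            self._free.append(obj)         # beyond the cap the entry is simply dropped

    def on_memory_pressure(self):
        # Resource Monitor notification: release a preconfigured number of entries
        del self._free[:self._trim]
```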
Store DMVs and Size control
Yukon's DMVs enable you to observe cache behavior in action. In Beta2 there were a few views that exposed information about stores: sys.dm_os_memory_caches and sys.dm_os_memory_pools.
In Beta3 you get much more:
sys.dm_os_memory_cache_counters - provides summary information for every store: amount of memory used, number of entries, number of entries in use, etc.
sys.dm_os_memory_cache_hash_tables - provides information about cache stores' hash tables: max, min, and average bucket length, etc.
sys.dm_os_memory_cache_clock_hands - provides information about the clock hands for every cache and user store: whether a hand is running, number of rounds completed, number of entries removed, etc.
These views are very useful in understanding, monitoring, and tuning the behavior of a single store and of SQL Server overall. For example, if you observe constant hand movement, you might be able to improve server performance by finding ways to stop it.
In my next post I will try to analyze different OOM errors. I will also take a look at ways of avoiding getting the server into different types of memory pressure. Stay tuned!
I hope this post was useful to you. Please let me know your comments.