Final changes in SQL Server 2005's memory manager

As you would expect, SQL Server 2005 went through rigorous testing over the last several months. During that period we identified and made several improvements that you might be interested to hear about.

 

Memory Broker

While analyzing SQL Server's behavior with respect to memory consumption from multiple sources, we realized that we had an opportunity to improve dynamic memory distribution even further. If we put in place a centralized mechanism that dynamically distributed memory between the buffer pool, query execution, the query optimizer, and caches, memory distribution would be more efficient and adaptable to different types of loads. So we did it.

 

The final release of SQL Server 2005 has a new memory component: the Memory Broker. The Memory Broker's responsibility is to distribute memory across different components based on their demands. One can think of the memory broker mechanism as a control mechanism with a feedback loop. The mechanism is simple: the memory broker monitors memory demand and consumption by each component. Based on the collected information, it calculates an "optimal" memory distribution for each component and broadcasts that information to the components. Each component then uses the information to adapt its memory usage accordingly.

 

You can monitor memory broker behavior by querying the memory broker ring buffer (note that the DMV name is sys.dm_os_ring_buffers, plural):

select * from sys.dm_os_ring_buffers

where

ring_buffer_type = 'RING_BUFFER_MEMORY_BROKER'

 

The ring buffer is only updated when the memory broker wants the behavior of a given component to change: shrink, grow, or remain stable.
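Each row's record column holds an XML fragment describing the notification. As a sketch, you can shred it to see the most recent broker decisions; the element names used below (Notification, Broker) are assumptions about the record layout, so inspect a raw record on your build first:

```sql
-- Hedged sketch: pull the latest memory broker notifications.
-- The XML element names below may differ between builds;
-- select the raw record column first to confirm them.
select top 10
    timestamp,
    cast(record as xml).value('(//Broker)[1]', 'varchar(60)')       as broker,
    cast(record as xml).value('(//Notification)[1]', 'varchar(30)') as notification
from sys.dm_os_ring_buffers
where ring_buffer_type = 'RING_BUFFER_MEMORY_BROKER'
order by timestamp desc;
```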

 

One can use DBCC MEMORYSTATUS to find out the last memory broker notification.
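The command takes no arguments; the section of its output that reflects the broker's last notification varies by build, so look through the result sets rather than relying on a fixed section name:

```sql
-- Dumps a series of result sets describing memory manager state,
-- including the last memory broker notification.
-- Which result set carries the broker information depends on the build.
dbcc memorystatus;
```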

 

NUMA

In my post https://blogs.msdn.com/slavao/archive/2005/08/02/446648.aspx I outlined Windows and SQL Server NUMA support and the issues associated with NUMA. In the final release we made several improvements that you might want to be aware of.

 

In the SQL Server 2005 final release, the Buffer Pool will use remote pages if it can't get local ones. Previously this wasn't the case: the BP would stop allocating memory from a given node once it could no longer get local pages. Because of that, we had to jump through hoops to make sure that the BP got local pages. That is no longer the case. If your application can tolerate the latency of using foreign pages (and 99% of apps fall into this category), you don't have to run the "warm up" load described in my previous post.

 

In addition, we changed the meaning of the min/max server memory settings on NUMA hardware: these settings now get divided among the nodes. For example, if you set max server memory to 16GB on a system with four nodes, the Buffer Pool will allocate 4GB of memory on every node. If you take a node offline by changing SQL Server's affinity, the max server memory setting will be redistributed among the remaining nodes. Continuing the example, if you take two nodes offline, the 16GB will be evenly distributed among the two remaining nodes, 8GB each. Since a Buffer Pool node is capable of using foreign pages, remote memory will be utilized if there is not enough memory on the remaining nodes. If you don't want the server to use memory from the nodes it is no longer running on, you will need to decrease the max server memory setting as well after taking nodes offline.
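For the example above, lowering the cap after taking two of the four nodes offline would look something like this; the 8192MB value is just the arithmetic from the example (two remaining nodes at 4GB each), not a recommendation:

```sql
-- Hedged sketch: after taking nodes offline via the affinity mask,
-- lower max server memory so the remaining nodes don't reach for
-- foreign pages. 8192MB = 2 remaining nodes * 4GB from the example.
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory', 8192;
reconfigure;
```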

 

We have also exposed a set of new performance counters that you might find useful: SQL Server:Buffer Node. This set of counters shows the memory used by the Buffer Pool on every node.
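Besides Performance Monitor, these counters can be read from inside the server through the performance counter DMV; here the instance_name column distinguishes the nodes:

```sql
-- Read the per-node Buffer Pool counters from inside the server.
-- instance_name identifies the NUMA node the counter belongs to.
select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where object_name like '%Buffer Node%';
```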

 

Keep your questions coming!