Q & A: Does SQL Server always respond to memory pressure?

Q: I think I've seen references recently to problems with SQL Server 2005 not releasing memory when under pressure, possibly due to the Lock Pages in Memory option. We have a server where this seems to be happening: SQL Server 2005 uses all available memory and the server basically dies because there's no memory left for other processes.

How do I make it work like SQL Server 2000, where it releases memory when it detects memory pressure from other applications?

A: The memory behavior is slightly different between SQL Server 2005 and SQL Server 2000. SQL Server 2000 won't respond to memory pressure when running with the lock pages in memory option, so you will have to configure the max server memory setting in such a way that you leave enough free memory for Windows and other applications on the machine (see the memory setting recommendations below).

SQL Server 2005 does respond to memory pressure whether lock pages in memory is enabled or disabled. However, there is a caveat you need to be aware of: when the OS hits memory pressure, depending on the state of the machine and its applications, it can be really slow at freeing up resources. This means SQL Server might not get enough CPU cycles to respond to the memory pressure quickly, which can cause other applications, and possibly the OS, to hit out-of-memory (OOM) errors. When lock pages in memory is not enabled, it is also possible that the OS will page out SQL Server, which can result in very poor SQL Server performance.

You can avoid such conditions by setting max server memory to an appropriate value. So what is this value? I usually recommend deriving it from the amount of RAM installed on the machine and the amount of available memory you need to keep the OS and other applications happy.

RAM installed      Available memory (as reported by Task Manager) you would want under heavy load

< 4GB              512MB - 1GB
4GB - 32GB         1GB - 2GB
32GB - 128GB       2GB - 4GB
128GB+             4GB+
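As a sketch of how to apply the cap, max server memory can be set with sp_configure. The 28672MB value below is only an illustrative example for a hypothetical 32GB machine, chosen to leave a few GB available per the table above; your number will depend on your own workload.

```sql
-- Illustrative only: cap the buffer pool on a hypothetical 32GB machine,
-- leaving roughly 4GB for the OS, other applications, and non-buffer-pool
-- allocations. 'max server memory (MB)' is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- 28GB, example value
RECONFIGURE;
```

The change takes effect without a restart, though the buffer pool shrinks gradually rather than instantly.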

There are a couple of things you need to be aware of:

A. You only need to consider setting max server memory if you expect the load on the machine to be memory bound.

B. You need to look at the machine during heavy load to come up with an appropriate max server memory setting that keeps available memory in the recommended range.

C. The more RAM you have on the machine (32GB+), the more important it is to have the max server memory setting turned on.

D. My recommendations are "personal" recommendations; you will need to tune your box to find the best combination for you.

E. The max worker threads setting affects how much memory the server will require under heavy concurrent load. On the x64 platform, each SQL Server thread can consume 2MB of physical memory, so if you configure SQL Server for 2000 threads, under heavy load it might require an additional 4GB of physical memory (2MB * 2000) on top of max server memory. On IA64 it is even higher, i.e. 8GB (4MB * 2000).

F. The max server memory setting only controls the size of the Buffer Pool, and hence doesn't affect allocations going through the Multi Page Allocator (MPA). If your load requires MPA allocations, you will have to take those into account as well. You can monitor MPA usage with the sys.dm_os_memory_clerks DMV.

G. Memory allocations requested by external components such as extended stored procedures (XPs), COM objects (sp_OACreate), and others are not controlled by the max server memory setting.
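The MPA usage mentioned in point F can be inspected with a query along these lines; the multi_pages_kb column of sys.dm_os_memory_clerks reports memory allocated outside the buffer pool, per memory clerk.

```sql
-- Sketch: summarize multi-page (non-buffer-pool) allocations per memory
-- clerk type, largest consumers first (SQL Server 2005 DMV).
SELECT type,
       SUM(multi_pages_kb) AS multi_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY multi_pages_kb DESC;
```

Clerks with large multi_pages_kb totals identify memory that max server memory does not cap and that must be budgeted for separately.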

Let me know if you have more questions.




Comments (11)

  1. Glenn Berry says:


    Another good post. For a dedicated SQL Server 2005 x64 machine with 32GB of RAM, setting Max Server memory to 30GB seems maybe a little low. Wouldn’t you also want to look at metrics like Page Life Expectancy and Memory Grants Pending to decide what to set Max Server Memory at?

  2. slavao says:

    Glenn, we have to be careful here. I specifically avoid recommending setting max server memory to 30GB on a 32GB system. You need to make sure that the amount of memory available to the OS doesn’t drop below ~2GB under heavy load.  

    You are also correct that you need to monitor the above counters to find the right max server memory setting; however, if the amount of memory available to the OS drops, you are in danger of bringing the whole machine to its knees.  

    You might think of my recommendation as a way to figure out the upper limit for max server memory. If you notice, based on other counters, that SQL Server is memory bound and you can’t increase max server memory any further due to my recommendation, then to improve the situation you might need to buy more RAM.

  3. nd-pike says:


    I like to use the Pages Output/sec counter to monitor the hard page faults which are occurring on the system.  As you increase ‘max server memory’, keep a history of this counter.  If you start to note significant increases you may have set ‘max server memory’ a little too high, thereby forcing other user mode processes to page more.  I make gradual increases in the ‘max server memory’ setting, e.g. 512MB or 256MB at a time, before assessing the Perfmon counters against a baseline.

  4. Chris Conner says:

    You should also remember that if your memory is 32GB, the kernel is going to need 2GB of memory, never mind what the OS needs just to run. 🙂  

    For machines that have > 16 GB of memory, I subtract 4 GB: 2 for the kernel and 2 for the OS and applications.

    If I have 16 GB or less, I subtract 3 GB IF I am using the /3GB switch in boot.ini. If I am NOT using the /3GB switch, the kernel takes 2GB, and since I always leave a 2GB buffer for my applications and the OS on machines with less than 16 GB of memory, that adds up to 4GB to subtract instead of the 3GB with the /3GB switch.

    Like Slavo has said, this is a personal preference. 🙂

  5. Carpe Datum says:

    I’m continuing a series on the Standard Reports you can find in SQL Server Management Studio (SSMS).

  6. There is an almost constant stream of posts on forums asking about configuration of SQL Server 2005 memory,

  7. Welcome to the Dynamics Ax Performance Team’s blog. We’re putting together a team introduction and hope

  8. Here’s something I was looking at yesterday which is worth a quick look. It’s quite a common scenario

  9. ajaymalloc says:

    A good guideline for setting up the server.

  10. ajayciti says:


    When you say "available memory under heavy load", are you talking about SQL Server load, or overall load on the OS with available memory as reflected in Task Manager? If it is overall load on the OS, always expecting more than the recommended available memory seems rather hypothetical and also an underutilization of resources.

    Do you mean that after configuring max server memory we need to leave the recommended memory for the OS? I see that as quite feasible.

    Appreciate your help on this. I have a server which has 8GB, and out of that 6GB is given to SQL Server using max server memory. Most of the time the available memory is approximately 300 to 400MB, but a few times it has gone even below 50MB. Lock Pages in Memory is enabled. A few other applications are also running on the box. SQL Server page life expectancy is more than 1500, Free list stalls/sec is 0. No memory grants pending. No CPU spikes.

    Only once has a slowdown been reported, but there was nothing in the event log or the SQL Server errorlog. SQL Server has never gone below 6GB.

    Should I still aim for the 1GB of available memory to be shown in Task Manager?

  11. kdoyle says:

    We’re deploying a warehouse configuration on hundreds of systems. We’ve found that certain parts of the nightly load process gain a significant benefit from having massive amounts of system cache (for disk I/O buffering of flat-file preprocessing, with no SQL going on at all), while other parts of the process benefit from having the SQL max server memory set as large as possible (for serious SQL script denormalization processes). What we’re leaning towards doing is using the sp_configure commands during the load to dynamically reconfigure max server memory, in order to entice SQL Server to give up its buffers when we need more system cache. We could split the difference and just set a fixed amount of server memory, but then both parts of the process would suffer performance degradation.

    I know that as of SQL Server 2005, changing max server memory became dynamic (it no longer requires a server restart), but is it sensible to jockey the parameter around like this? I wasn’t able to find any other way to get SQL Server to free up the buffers it’s sitting on when we don’t need them anymore, and the system cache will only use RAM that’s available; it does not cause SQL Server to return buffer space just because heavy disk I/O is going on. And here we have a case where at night one kind of load goes on, and during the day a completely different kind of load goes on, where we would potentially like to tune the server memory parameters for each set of conditions. Is routinely tweaking the server memory settings back and forth a reasonable solution to the problem?
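    The back-and-forth reconfiguration kdoyle describes could be sketched as follows; the MB values are purely illustrative, not recommendations for any particular machine.

```sql
-- Illustrative only: shrink max server memory before the cache-heavy
-- flat-file stage, then restore it for the SQL-heavy stage. On SQL Server
-- 2005 the change takes effect without a restart, but the buffer pool is
-- released gradually, not instantly.
EXEC sp_configure 'max server memory (MB)', 8192;   -- free RAM for system cache
RECONFIGURE;
-- ... run flat-file preprocessing here ...
EXEC sp_configure 'max server memory (MB)', 28672;  -- restore for SQL-heavy load
RECONFIGURE;
```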
