ASP.NET - how it uses Windows memory

Sooner or later you are bound to run into the dreaded OutOfMemoryException and wonder, "why me, why now?". ASP.NET and other web applications are particularly susceptible to high memory consumption because a web application is typically serving hundreds or thousands of users all at once, unlike your typical desktop application, which is just serving one user.

How big is big?

The code running inside a usermode process is provided with the illusion that it is in its own private world of memory. This illusion is provided by the Windows Virtual Memory Manager. Code can request a chunk of this virtual address space through a call to the VirtualAlloc API, and it is handed out in chunks of 4KB. By default a usermode process "sees" 2GB of address space; the other 2GB is reserved for the kernel. This 2GB is a pretty big place. If every pixel on my screen of 1024 x 768 pixels represents one of these 4KB pages, 2GB covers about 2/3 of my entire screen. But in these days of data-hungry web applications it is surprising how easy it is to run out.
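The pixel analogy is easy to verify with a little arithmetic (a Python sketch, nothing Windows-specific):

```python
PAGE_SIZE = 4 * 1024       # VirtualAlloc hands out memory in 4KB pages
USER_SPACE = 2 * 1024**3   # the default 2GB of usermode address space

pages = USER_SPACE // PAGE_SIZE
pixels = 1024 * 768        # one pixel per page on a 1024 x 768 screen

print(pages)               # 524288 pages of 4KB each
print(pages / pixels)      # exactly 2/3 of the screen
```

Half a million pages sounds like a lot, until you start counting what else has to live in there.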

If a system is booted with the /3GB switch in boot.ini (only supported on the Enterprise and Datacenter editions of Windows 2000, and all versions of Windows XP and Windows Server 2003) a process that is linked with the /LARGEADDRESSAWARE switch can "see" 3GB. Aspnet_wp.exe is linked that way in version 1.1 and can take advantage of this. However, you have to be careful booting a system with /3GB, as it reduces the amount of memory available to the kernel, which is not always appropriate; it depends on what the server is used for and what else runs on it. And that is your lot: 3GB is all you are going to get unless you make the switch to the 64-bit world. If you are running on x64 Windows Server 2003, even if you stick to 32-bit worker processes you can still get a bit extra. Usermode now gets access to a full 4GB of addressable space and the kernel bits are neatly tucked away out of sight. This is discussed in a rather interesting article about how the folks at Microsoft.com switched to x64.

Whether the process "sees" 2GB or 3GB, there are many things that use this virtual address space. For example, each DLL loaded into the process, thread local storage, stacks, native heaps and directly allocated virtual memory blocks all occupy parts of it. Out of what is left, the .NET runtime has to allocate virtual address ranges for use by the managed heap. So in the best case you could expect to get about 1.5GB or 2.2GB of managed objects allocated in a process (on standard and /3GB-booted systems respectively).

Fragmentation

Virtual memory within a process always gets fragmented to a greater or lesser extent; the free space is never all together in one place. And virtual address space fragmentation is not like disk fragmentation: you cannot just run a defragmentation tool to clean things up (despite what people will try to sell you on the internet!). Since native code works with absolute pointers to memory allocations, you would have to inform everyone that holds a pointer to a block if you moved it. That just isn't going to happen. (The .NET garbage collector does do this for managed memory, but that's the point - it's managed. Virtual memory is the Wild West of memory.)

Certain types of memory allocation require a minimum size of free block to succeed. For example, the .NET runtime generally reserves virtual address space in 64MB blocks. So it is possible to be in a situation where there are hundreds of megabytes free but no single free block bigger than 64MB. In that situation, if the .NET runtime happens to need to extend the reserved space for the managed heap (due to memory allocation pressure from the application), an OutOfMemoryException will occur.
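A toy model makes the failure mode concrete. The free-block sizes below are invented for illustration; the point is only that the total free space and the largest single hole are very different numbers:

```python
SEGMENT = 64 * 1024 * 1024  # the runtime wants to reserve a 64MB block

# A fragmented address space, modelled as the sizes of its free holes (in MB).
# Plenty of memory free in total, but no single hole is 64MB or larger.
free_blocks_mb = [48, 32, 60, 40, 30, 50]
free_blocks = [mb * 1024 * 1024 for mb in free_blocks_mb]

total_free = sum(free_blocks)
largest = max(free_blocks)
can_reserve = largest >= SEGMENT

print(total_free // (1024 * 1024))  # 260MB free in total...
print(can_reserve)                  # ...yet the 64MB reservation fails: False
```

260MB free, and still out of memory. That is why "how much is free?" is the wrong question; "how big is the largest contiguous free block?" is the right one.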

If you want to see all those various virtual memory allocations, just attach WinDBG and run the !address command. This will output details of every virtual memory allocation in the process. If you haven't got time to read them all, just add -summary to the command and you'll get a high-level overview of how much is committed, how much is free, how much is used for native heaps, and the size of the largest contiguous free block.

Heaps

Now 4KB allocations are pretty inefficient for most application purposes. If you want to allocate memory to store the word "Beans" as ASCII text then you only need 6 bytes including a null terminator, so why would you want 4KB? 99.8% of the memory would be wasted. And if you then wanted to allocate memory for "Cheese" you would need another 4KB allocation. Beans and Cheese are both good, but are they that good?

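The waste figure checks out (again just Python arithmetic, not a real allocation):

```python
PAGE_SIZE = 4096                    # one whole virtual memory page
needed = len(b"Beans\x00")          # 6 bytes, including the null terminator

wasted = (PAGE_SIZE - needed) / PAGE_SIZE
print(needed)                       # 6
print(f"{wasted:.2%}")              # 99.85% of the page goes unused
```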

Enter the concept of heap managers.

The idea of the heap manager is to act as a broker between the application code and virtual memory. It's kind of like borrowing money. If you want to borrow £5,000 to buy a car and you go out to the international money markets to borrow it, the financiers are going to turn around and say "Sorry, but we only lend in tranches of £10,000,000." So instead you go to your local bank. They've already borrowed the money in chunks of £10,000,000 and are willing to do the administration and hand it out in smaller loans to people like you and me. Well, sometimes.

Windows provides a heap manager which is used for memory allocation by 99% of non-.NET applications, whether they know it or not. When a C++ application calls 'new', this is usually compiled down to a call to HeapAlloc, the API by which you ask the heap manager for a chunk of memory. I sometimes refer to this as the native heap to distinguish it from the .NET garbage collected heap. Most .NET application processes are actually using the native heap as well. Even the CLR uses a little bit of native heap for some of its internal structures. Also, if your application interoperates with things like COM components or ODBC drivers, those components will be mapped into your process space and will be using native heap for their memory allocation.
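The broker idea itself is simple enough to sketch. The toy class below (Python, purely illustrative - the real Windows heap manager keeps free lists, coalesces neighbours and asks VirtualAlloc for more chunks as it needs them) grabs one big block up front and hands out small pieces of it:

```python
class ToyHeap:
    """A toy broker: take one big chunk from the 'OS', hand out small pieces.

    Only a sketch of the concept, not how the Windows heap works internally.
    """
    CHUNK = 4096  # one "virtual memory" page obtained up front

    def __init__(self):
        self.chunk = bytearray(self.CHUNK)  # the big allocation from the OS
        self.offset = 0                     # next free byte within it

    def alloc(self, data: bytes) -> int:
        """Store the bytes and return their offset - the toy 'pointer'."""
        if self.offset + len(data) > self.CHUNK:
            raise MemoryError("chunk exhausted - a real heap would grab more")
        start = self.offset
        self.chunk[start:start + len(data)] = data
        self.offset += len(data)
        return start

heap = ToyHeap()
beans = heap.alloc(b"Beans\x00")   # both strings now share a single 4KB page
cheese = heap.alloc(b"Cheese\x00")
print(beans, cheese)               # 0 6
```

Thirteen bytes used out of one page, instead of two whole pages - that is the entire value proposition of a heap manager.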

The managed objects of your application are of course stored in the .NET managed heap, a.k.a. the garbage collected heap. The CLR allocates address space for it directly with calls to VirtualAlloc. It does this (mostly) in chunks of 64MB, commits it on an as-needed basis, and then efficiently subdivides it into the various managed objects the application allocates.
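The reserve/commit distinction is worth pausing on: reserving claims a range of addresses, while committing actually backs pages with memory. A toy model (again illustrative Python, not the CLR's real allocator) of one 64MB segment:

```python
SEGMENT_SIZE = 64 * 1024 * 1024  # address range reserved up front
PAGE = 4096                      # commit granularity

class Segment:
    """Reserve a 64MB address range; commit 4KB pages only as objects land."""
    def __init__(self):
        self.reserved = SEGMENT_SIZE
        self.committed = 0  # bytes actually backed by real memory
        self.used = 0       # bytes occupied by objects

    def allocate(self, size):
        self.used += size
        # commit just enough whole pages to cover everything allocated so far
        needed = -(-self.used // PAGE) * PAGE  # round up to a page boundary
        if needed > self.committed:
            self.committed = needed

seg = Segment()
seg.allocate(24)      # one small object...
print(seg.committed)  # ...commits only the first 4096 of 67108864 reserved bytes
```

So a freshly reserved segment costs address space but very little actual memory; it is the commit that eats RAM and page file.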

Damage limitation

Sometimes the use of memory in ASP.NET applications gets out of control. All at once 50 users decide to request a 50MB report that none of the developers ever anticipated - and they only ever tested it with 10 users anyway. Or a bug in code hooks up an object instance to a static event handler and before you know it you've leaked all your precious memory.

To combat this, ASP.NET has a governor mechanism built in. It regularly monitors the approximate total committed memory in the process. If this reaches a specified percentage of the physical RAM on the system, ASP.NET decides to call it a day on that particular worker process instance and spins up a new one. New requests are routed to the new process, the existing requests in the old one are allowed to come to a natural conclusion, and then the old worker process is shut down. This threshold is set via the memoryLimit setting in machine.config.

If the webGarden setting is false then only one aspnet_wp.exe process runs and the memoryLimit setting is interpreted as the percentage of physical RAM at which ASP.NET will proactively recycle the worker process. However, if webGarden is true then the number of CPUs is taken into account (as you get as many worker processes as you have CPUs).

Therefore if you have 4GB of RAM and memoryLimit="70", private bytes for the aspnet_wp.exe process would have to reach 4 * 0.7 = 2.8GB before ASP.NET initiated a recycle. If webGarden="true" and you had 4 CPUs, the calculation would be 4 * 0.7 / 4 = 700MB per process. With a single worker process it is actually very unlikely you will ever reach that 2.8GB threshold, because you are likely to have run into an OutOfMemoryException due to address space fragmentation long before you managed to commit 2.8GB of allocations. So you need to think carefully about what memoryLimit should be set to, based on how much RAM you have, how many CPUs you have and whether you have enabled web gardening. In IIS6 things are somewhat different, as IIS takes over the performance and health monitoring of the worker processes.
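The arithmetic above can be wrapped in a small helper. This is an illustrative function I've made up for working through the rules, not an actual ASP.NET API:

```python
def recycle_threshold_mb(ram_gb, memory_limit_pct, cpus=1, web_garden=False):
    """Approximate per-process private-bytes threshold (in MB) at which
    ASP.NET recycles a worker process, per the rules described above.
    Hypothetical helper for illustration only."""
    threshold_gb = ram_gb * memory_limit_pct / 100
    if web_garden:
        threshold_gb /= cpus  # one worker process per CPU shares the budget
    return threshold_gb * 1024

print(recycle_threshold_mb(4, 70))                           # ~2867MB, the 2.8GB case
print(recycle_threshold_mb(4, 70, cpus=4, web_garden=True))  # ~717MB, roughly 700MB
```

Plugging in your own RAM, CPU count and webGarden setting is a quick sanity check that the threshold you have configured is actually reachable by a 32-bit process.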

You do not really want to get to the stage where OutOfMemoryExceptions start occurring. If you do, it is unlikely that the application will recover. What will most likely happen is that a bit of memory will be recovered, allowing things to continue for a while, but then another allocation will fail at some random place and the worker process will become unstable. Ultimately it will become unresponsive, be deemed unhealthy by either IIS or the ASP.NET ISAPI extension, and be terminated.

A better option would be to treat the OutOfMemoryException as a fatal-to-process situation and terminate the process immediately. With .NET 1.1 and later this is possible if you set the GCFailFastOnOOM registry value to a non-zero value. ([Updated 30/5/8] I recommend a value of 5. Any non-zero value used to be sufficient to trigger a failfast on OOM, but changes to the CLR mean that is no longer the case.) Note: in this case the process is rudely terminated. In-memory state will be lost, but hopefully your application puts anything important in a transacted database anyway. As soon as the process is gone, ASP.NET or IIS health monitoring will spin up a new one.

These mechanisms for automatic recycling of the worker process in high memory situations are really just stopgap measures until you get around to figuring out why your application uses so much memory in the first place, or why it uses more and more as time goes on. The answer to those two questions is another story entirely...