Some history on the ASP.NET cache memory limits

When v1.0 released, the only OS that ASP.NET supported was Win2k, the only process model was aspnet_wp, and the only architecture we supported was x86.  The aspnet_wp process model had a memory limit that was calculated at runtime during startup.  The limit was configurable (<processModel memoryLimit/>) as a percentage of physical RAM, and the default was 60%.  This limit prevents the process from consuming too much memory, especially in the face of a memory leak caused by user code, and allows the process to gracefully recycle.  We also had a feature known as the ASP.NET cache, which allows you to store objects with various expiration and validation policies.  The ASP.NET cache had built-in logic that would drop entries when private bytes became too close to the private bytes memory limit for the process.  The actual percentage at which the cache began to drop entries is an implementation detail, and it is different on different hardware.  Suffice it to say that the cache dropped entries when memory usage approached the process memory limit.
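
For reference, that limit lived in machine.config.  A minimal sketch (the value shown simply restates the 60% default):

```xml
<!-- machine.config (v1.x, aspnet_wp process model) -->
<system.web>
  <processModel memoryLimit="60" /> <!-- percent of physical RAM -->
</system.web>
```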

This default limit of 60% worked okay for machines with small amounts of RAM.  The 60% value was chosen to allow plenty of breathing room in the case where the limit was exceeded, since when that happens the new process is created before the old process has completely drained existing requests.   Stress runs showed that memory limits higher than this resulted in too much memory paging during process recycles.  However, there was still a problem on boxes with large amounts of RAM.  For example, a box with 4GB of RAM had a default memory limit of 2.4GB (60%).  This obviously doesn’t work given that the user mode address space is only 2GB.  Furthermore, ASP.NET apps typically had a very fragmented virtual address space.  We often saw apps throwing OutOfMemoryExceptions when virtual bytes reached about 1.5GB.  We found through experimentation that on x86 with a 2GB user-mode virtual address space, a conservative private bytes limit of 800 MB worked for most people.  We began recommending that people use this as a cap on private bytes.  Of course some applications could go beyond this, but if you wanted to play it safe, 800MB was a good limit for private bytes.

In v1.1, we also supported WS03.  WS03 used a different process model (w3wp).  This process model gets its private bytes limit from IIS configuration (“Maximum Used Memory” on the Recycling tab of Application Pool properties in IIS Manager), not the aforementioned <processModel memoryLimit/>.  Unfortunately, this limit had no default (it was not set by default).  So if the application used the ASP.NET cache, we would never drop entries, and eventually you would start seeing OutOfMemoryExceptions.  These are non-recoverable and require human intervention, since the process would typically stay up and serve responses with a nicely formatted OutOfMemoryException error page from that point forward.

In v2.0, we fixed this by exposing new configuration for the cache: <caching><cache privateBytesLimit/></caching>.  Now the cache could have a memory limit independent of the process memory limit.  For backward compatibility, we also applied the process memory limit if it was set.   Unfortunately, this complicated things a bit, and the way we calculate the cache memory limit is hidden to the user.  If you don’t set a cache or a process memory limit, we calculate one for you.   If the user mode address space is 2GB, we use MIN(60% physical RAM, 800MB).  If the user mode address space is greater than 2GB and the process is 32-bit, we use MIN(60% physical RAM, 1800MB).  And for 64-bit processes, we use MIN(60% physical RAM, 1TB).  That’s what happens if you don’t set any limits.  However, if you set both a cache memory limit and a process memory limit, we will use the minimum of the two.  And if you only set one, we will use the one you set.  Confused?  You’ll be happy to know that the actual limit we use is exposed by the property Cache.EffectivePrivateBytesLimit.
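
To make the default calculation concrete, here is a minimal sketch of that decision logic in C# (my own illustration of the rules described above, not the actual Cache implementation; the class and parameter names are made up):

```csharp
using System;

class CacheLimitSketch
{
    // Sketch of the default private bytes limit used when neither
    // <cache privateBytesLimit> nor a process memory limit is configured.
    public static long DefaultPrivateBytesLimit(long physicalRamBytes,
                                                bool is64BitProcess,
                                                bool largeAddressAware)
    {
        long sixtyPercentOfRam = (long)(physicalRamBytes * 0.60);
        long cap;
        if (is64BitProcess)
            cap = 1L << 40;                // 1 TB cap for 64-bit processes
        else if (largeAddressAware)
            cap = 1800L * 1024 * 1024;     // 1800 MB cap when address space > 2GB
        else
            cap = 800L * 1024 * 1024;      // 800 MB cap with a 2GB address space
        return Math.Min(sixtyPercentOfRam, cap);
    }

    static void Main()
    {
        long fourGB = 4L * 1024 * 1024 * 1024;
        // 32-bit process, 2GB address space, 4GB RAM: MIN(2.4GB, 800MB) = 800MB
        Console.WriteLine(DefaultPrivateBytesLimit(fourGB, false, false) / (1024 * 1024));
    }
}
```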

While 60% may not work for boxes with 1TB of RAM, this value is configurable.

Enough about private bytes.  The cache also has a physical memory limit that was introduced in v2.0.  It was introduced because the garbage collector (GC) becomes very aggressive in low memory conditions.  If the cache is consuming a bunch of memory and inducing the low memory condition, then it needs to release entries to alleviate the pressure on the GC.  In 2.0, the cache dropped entries when available memory was <= 11%.  We later discovered this was too aggressive, and have backed it off in 2.0 SP1 so that now we can use much more physical memory before dropping entries.  The actual limits that we use are an implementation detail, and they are different for different hardware.
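
This threshold can also be overridden in web.config via the cache element; a sketch (the 90 here is an illustrative value, not the default):

```xml
<system.web>
  <caching>
    <cache percentagePhysicalMemoryUsedLimit="90" />
  </caching>
</system.web>
```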

The v2.0 SP1 cache work was originally requested as a QFE, and there is a KB article for it.  Anyone using the v2.0 ASP.NET cache should install this QFE.  It will of course be included in v2.0 SP1, when it is released.

The cache memory manager should not be the primary eviction mechanism.  It is better to use expiration policies on the entries, so that they expire before encountering memory pressure.  Most of the issues surrounding memory stem from the fact that the ASP.NET cache is not able to detect how much memory it is using.  It knows the number of entries, but not their sizes.  It uses Private Bytes for the process and available physical memory for the machine to determine when to drop entries, even though the cache may not be the cause of the memory pressure.  I suggest thinking of the cache memory manager as a safety net or fallback, and using expiration policies or other forms of validation to ensure that your cache entries are removed before encountering memory pressure.  If you only have a handful of cache entries this is not really an issue, and you can rely on the cache memory manager.  But if you’re inserting unique entries on a per-request basis, or if you simply have a very large number of entries, it makes sense to use expiration and/or validation policies.
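
For instance, giving each entry an absolute expiration removes it on a schedule instead of leaving eviction to the memory manager.  A minimal sketch (the cache key and the LoadProducts helper are illustrative, not from the original article):

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Inside a page or handler: cache the data for 10 minutes, then let it expire,
// rather than waiting for the cache memory manager to evict it under pressure.
HttpContext.Current.Cache.Insert(
    "products",                      // illustrative cache key
    LoadProducts(),                  // illustrative data-loading helper
    null,                            // no CacheDependency
    DateTime.UtcNow.AddMinutes(10),  // absolute expiration
    Cache.NoSlidingExpiration);
```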

Comments (15)

  1. I have already mentioned the values for Maximum Memory used and Maximum Virtual Memory we recommended

  3. There is a hidden value in configuring the overlapped recycling values. I was lucky enough to gain the

  6. PaulJones says:

    ASP.NET Cache has many limitations associated with it. It is in-process and standalone, which means scalability will always be an issue. These days distributed caching is the talk of the town and for good reason. It provides the scalability, availability and performance that a high-end application needs to perform at its peak. I recommend everybody look into this amazing phenomenon. Even MS has jumped into this arena with a product of their own. The future of high performance computing is distributed caching.

    All the best!

  7. richardfremmerlid says:

    I’ve been examining this issue quite a bit over the past few days.  We have a variety of 32-bit servers here ranging from 1GB of memory to 4GB of memory.  

    Code that works fine on our 1GB servers with Windows2003 would fail on machines with 3GB or 4GB with OutOfMemoryExceptions.

    Based on what you’ve mentioned in this article, I have tried modifying the application pool memory recycling options within IIS, and I’ve also tried to modify the <cache> settings within web.config.

    Here is a summary of my findings:

    1. You can specify the private memory limit for processes in two places:

    a. IIS application pool memory recycling (IIS Metabase)

    b. Web.config



    <caching>
      <cache disableMemoryCollection="false"
             disableExpiration="false"
             privateBytesLimit="629145600"
             percentagePhysicalMemoryUsedLimit="20"
             privateBytesPollTime="00:02:00" />
    </caching>


    2. If you set neither of these values, default calculations will take place based on your hardware configuration

    a. For 32-bit machines with <= 2GB of memory, the value for the private bytes limit will be set to 800MB*

    b. For 32-bit machines with > 2GB of memory, the value for the private bytes limit will be set to 1800MB**

    i. *The value set for private bytes, or for the memory recycling used memory value, will actually be applied internally at 90% of whatever you set.  So the effective value for 800MB would become 720MB

    ii. **This is what the developer said, but in my testing I’ve found it sets it to 800MB in both cases.  The calculations are not done correctly.

    iii. The default value for percentagePhysicalMemoryUsedLimit is 98%

    3. If you set values in both places, the lower value specified will take precedence.  For instance, if I set 600MB in IIS and 500MB in the web.config file, 500MB will be used.

    4. These problems are typically due to the way memory is managed on the large object heap.  Memory on the large object heap is not reclaimed until the process itself is killed or recycled.  This is problematic if the process in question still needs to reference objects that were in memory prior to the recycling.   If the memory recycling value is reduced, instead of OutOfMemoryExceptions you will start to see NullReferenceExceptions.

    5. For the larger memory machines, I’ve tried setting the defaults to be 800MB for the used memory in IIS, and I’ve tried the virtual memory setting of 1500MB, which should match the defaults on the 1GB machines.  Unfortunately, doing so doesn’t reflect the same results as IIS 6.

    6. To test the effective memory settings, I created a simple web application with the following C# code that I used during testing:

    Cache instance2 = HttpContext.Current.Cache;
    long effectivepercentage = instance2.EffectivePercentagePhysicalMemoryLimit;
    long effectivebytes = instance2.EffectivePrivateBytesLimit;
    lblMemoryAvailable.Text = "Cache Settings: <BR>EffectivePercentagePhysicalMemoryLimit=(" + effectivepercentage.ToString() + ")<BR>EffectivePrivateBytesLimit=(" + (effectivebytes/1024/1024).ToString() + "MB)";

    Is it possible to get some more testing done on these settings?  The recommended settings don’t match the results on 1GB servers, which I can prove.  Any additional insight or guidance is appreciated.

  8. richardfremmerlid says:

    Additional note:

    I’ve tried modifying the EffectivePercentagePhysicalMemoryLimit to match what it is on a machine with 1GB of memory. Since it was 60% of 1GB, I also tried 600MB/4000MB or 600MB/3500MB to match the same amount of memory that way also.

    To me, if these settings are made to match the lower-memory servers, the testing results should match on each, but they don’t.  Moving to 64-bit would resolve the issue, but that isn’t a feasible option for us at this point.

  9. Richard wrote, "Code that works fine on our 1GB servers with Windows2003 would fail on machines with 3GB or 4GB with OutOfMemoryExceptions."

    Richard, there is a bug in the v1.1 .NET Framework that causes ASP.NET to incorrectly handle the IIS 6.0 worker process (w3wp.exe) memory limit when it is 2GB or greater.  To work around this, I think you can explicitly set the memory limit so that it is below 2GB.  For a fix, please contact Microsoft Support or contact me via the Contacts page on my blog and I’ll put you in contact with Microsoft Support.

  10. JulienN says:

    Thanks for this clarification about memory usage in ASP.NET!  Very useful.

    We have issues with OOM on several websites.

    We have several application pools and sometimes they throw OOM.

    I think that 32-bit systems reach a memory limitation for the w3wp.exe process.

    Our web server is Windows 2003 SP1 with 4GB of physical memory, x86, 32-bit.

    We decreased the virtual memory (page file) size to 1GB; Windows then found 5GB of potentially available memory: 4GB physical plus 1GB of virtual memory.

    I found today one of our websites with an OOM exception.

    But I know that a process on 32-bit Windows can only address 2GB of memory, so I started my research there.

    I launched Process Explorer from Sysinternals and found that one of my w3wp.exe processes has a virtual size of 1.9GB.

    I don't know how the process can go beyond the 1GB virtual memory limit we set …

    I haven't configured the IIS recycling settings yet.

    So I suppose that, if the memory limit setting is 60% of available memory, then for 4GB: 60% of 4GB = 2.4GB, and 2.4GB is more than a process can handle…

    So I suppose that OOM is sometimes due to exceeding the virtual memory usage that a process can handle…

    We tried setting the virtual memory size larger than the physical RAM size.  OOM still appears.

    Do you think this is a possible cause of OOM on a 32-bit Windows x86 system?


    Julien N.


  11. Julien, OOM conditions occur almost exclusively because the process runs out of virtual address space.  On 32-bit systems, the user-mode virtual address space is limited to 2GB.  That's half of what a 32-bit pointer can address.  The other half is reserved for kernel mode.  

    OOM typically occurs when the process tries to allocate something but cannot find a contiguous region of memory in which that allocation will fit.  If you're lucky, this won't happen until Virtual Bytes for the process is very close to 2GB, which is what you observed when you saw OOM at 1.9 GB.  However, this can happen much sooner if the address space is more fragmented.  For this reason, for ASP.NET applications running on 32-bit servers with 2GB of user-mode virtual address space, we recommend that you do not allow Virtual Bytes for the process to exceed 1.4 GB.  This gives you plenty of breathing room.

    Because the v2.0 and earlier ASP.NET cache uses the process' private bytes to manage its size, we recommend that you set a private bytes limit on the IIS application pool of no more than 800 MB.  In other words, on Windows Server 2003, set the application pool private memory limit to 800 MB.  If you had less physical RAM, we would recommend that you reduce this value to no more than 60% of physical RAM, but in your case you have plenty of physical RAM (4GB).  
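
    That setting can also be scripted against the IIS 6 metabase, for example (illustrative pool name; note that PeriodicRestartPrivateMemory is specified in KB, so 819200 KB = 800 MB):

```
cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/AppPools/MyAppPool/PeriodicRestartPrivateMemory 819200
```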

    And just in case there is any confusion, the virtual address space is not the same as your page file size.



  12. John Clamon says:

    Hi Thomas,

    I am confused by the Private Memory Limit in IIS 7.0.  The help says "…privately allocated system physical memory that can be used by a worker process…".  Is it really the private working set?  I confuse it with "private bytes", which is defined as committed memory, so physical + page file.  Also, you talk about private bytes in the article …/ms972959.aspx

    Thanks in advance

  13. John,

    The IIS 7.0 "Private Memory Limit" on the Application Pool corresponds to private bytes for the process.  This is the value reported by the "Process(w3wp)\Private Bytes" performance counter.  It is also the value reported by GetProcessMemoryInfo in the PrivateUsage field of PROCESS_MEMORY_COUNTERS_EX.  In Task Manager, it is the same as "Memory – Commit Size" on the "Processes" tab; you have to use the column chooser to add this, because it is not there by default.
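
    A quick way to read the same value from managed code is Process.PrivateMemorySize64, which reports the process commit charge, i.e. private bytes (a minimal console sketch):

```csharp
using System;
using System.Diagnostics;

class PrivateBytesDemo
{
    static void Main()
    {
        // Private Bytes for the current process (committed private memory);
        // matches the "Process(...)\Private Bytes" performance counter.
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
        Console.WriteLine("Private Bytes: {0} MB", privateBytes / (1024 * 1024));
    }
}
```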



  14. John Clamon says:

    So, the word "physical" in the IIS help is unnecessary and fallacious. I have reported this to Connect and hope they will correct it. …/iis-7-application-pool-private-memory-usage-documentation