When memory is running low…


When I say memory I mean physical memory. Let’s assume you have enough virtual address space. When physical memory gets low, you may start getting OOMs or start paging.


You can experiment and see how to avoid getting into a low-memory situation, but sometimes it’s hard to predict and hard to test all the scenarios your app can get into. For example, if your app handles requests where the amount of memory each request consumes can vary dramatically, you may be able to handle 1,000 concurrent requests at some times before running out of memory but only 800 at others.


First of all, let’s talk about how you know when memory is low on the machine. You can find out by either calling the Win32 API GlobalMemoryStatusEx or monitoring the memory performance counters. Currently there’s no notification sent by the GC for low memory – one reason being that memory is used not only by the managed heap but also by native heaps, loaded modules, and so on.
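For illustration, here is a minimal sketch (C#, via P/Invoke) of checking the machine-wide memory load with GlobalMemoryStatusEx; the helper name and the idea of comparing against a percentage threshold are just examples, not part of any framework:

    using System;
    using System.Runtime.InteropServices;

    static class MemoryStatus
    {
        [StructLayout(LayoutKind.Sequential)]
        struct MEMORYSTATUSEX
        {
            public uint dwLength;
            public uint dwMemoryLoad;             // percentage of physical memory in use
            public ulong ullTotalPhys;
            public ulong ullAvailPhys;
            public ulong ullTotalPageFile;
            public ulong ullAvailPageFile;
            public ulong ullTotalVirtual;
            public ulong ullAvailVirtual;
            public ulong ullAvailExtendedVirtual;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool GlobalMemoryStatusEx(ref MEMORYSTATUSEX lpBuffer);

        // Returns true when the machine-wide physical memory load exceeds the given percentage.
        public static bool IsMemoryLoadHigh(uint thresholdPercent)
        {
            MEMORYSTATUSEX status = new MEMORYSTATUSEX();
            status.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUSEX));
            if (!GlobalMemoryStatusEx(ref status))
                throw new System.ComponentModel.Win32Exception();
            return status.dwMemoryLoad > thresholdPercent;
        }
    }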


You can get the process info via Win32 APIs, performance counters, or the managed Process class.
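For example, a small sketch using the managed Process class to read a couple of the per-process numbers (which properties you care about depends on what you’re trying to measure):

    using System;
    using System.Diagnostics;

    class ProcessInfoSample
    {
        static void Main()
        {
            Process p = Process.GetCurrentProcess();
            Console.WriteLine("Working set:   {0} bytes", p.WorkingSet64);        // physical memory in use by the process
            Console.WriteLine("Private bytes: {0} bytes", p.PrivateMemorySize64); // committed private memory (managed + native)
        }
    }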


To get the amount of memory the managed heap is using, the most accurate way is to monitor the “# Total committed Bytes” performance counter under the .NET CLR Memory category.
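A minimal sketch of reading that counter from code, assuming the counter instance name is simply the process name (the common case, but adjust for your setup):

    using System;
    using System.Diagnostics;

    class HeapCommittedSample
    {
        static void Main()
        {
            string instance = Process.GetCurrentProcess().ProcessName;
            using (PerformanceCounter committed =
                new PerformanceCounter(".NET CLR Memory", "# Total committed Bytes", instance))
            {
                Console.WriteLine("Managed heap committed: {0} bytes", committed.NextValue());
            }
        }
    }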


Something that server throttling code usually does is monitor memory usage: when memory consumption gets high, it stops accepting new requests and waits for the current requests to drain. When the memory load gets back down to a reasonably low point, it starts accepting new requests again.


But the problem is, if the current requests don’t make any allocations, or don’t make enough allocations to trigger a full GC, the managed heap doesn’t get a chance to collect dead objects, so after the current requests drain the memory consumption does not go down.


What do you do in this case? You can call GC.Collect. This is a legitimate case for calling GC.Collect yourself – but be careful how you call it! There are a few things you should keep in mind when calling GC.Collect:


Don’t call it too often –



  • you want to set a minimum interval between calls. What that interval should be depends on your application. Let’s say you have 500MB worth of live objects to collect (meaning the resulting heap size after the GC is about 500MB, not counting the garbage); perhaps you don’t want to call GC.Collect more often than every 5 seconds. You can determine this by doing a simple test on the type of machines your app usually runs on and seeing how long it takes to collect a certain number of bytes.

  • GC happens on its own schedule, so you want to account for that. When it’s time for you to induce a GC, one may have already happened during the last interval, in which case it’s not necessary for you to induce one. You can check for this by calling GC.CollectionCount(GC.MaxGeneration) after the interval elapses; if the collection count has already increased since the last time you induced a GC, you don’t need to induce another one.

Don’t call it when it’s not productive – if after you call GC.Collect the memory doesn’t drop by much, you want to call it less often. A good measure is to check whether, say, the memory dropped by at least 1% of the total memory you have on the machine; if it didn’t, perhaps you want to increase the interval by some percentage. A sketch of this kind of throttled collection follows below.
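To make the two points above concrete, here is a minimal sketch of this kind of throttled collection; the 5-second interval, the 1% threshold, the 1.5x back-off, and the delegate for measuring the heap are all assumptions you would tune or replace for your own app:

    using System;

    class CollectThrottle
    {
        // However you measure the managed heap, e.g. the "# Total committed Bytes" counter above.
        public delegate long GetCommittedBytes();

        TimeSpan interval = TimeSpan.FromSeconds(5);    // minimum time between induced GCs (tune for your app)
        DateTime lastInduced = DateTime.MinValue;
        int lastFullGcCount = GC.CollectionCount(GC.MaxGeneration);
        readonly ulong totalPhysicalBytes;              // e.g. ullTotalPhys from GlobalMemoryStatusEx

        public CollectThrottle(ulong totalPhysicalBytes)
        {
            this.totalPhysicalBytes = totalPhysicalBytes;
        }

        // Call this when your throttling logic decides memory is high and requests have drained.
        public void MaybeCollect(GetCommittedBytes getCommittedBytes)
        {
            if (DateTime.UtcNow - lastInduced < interval)
                return;                                 // too soon since the last induced GC

            int fullGcCount = GC.CollectionCount(GC.MaxGeneration);
            if (fullGcCount != lastFullGcCount)
            {
                lastFullGcCount = fullGcCount;          // a full GC already happened on its own
                return;
            }

            long before = getCommittedBytes();
            GC.Collect();
            long after = getCommittedBytes();

            lastInduced = DateTime.UtcNow;
            lastFullGcCount = GC.CollectionCount(GC.MaxGeneration);

            // Not productive: freed less than 1% of the machine's memory, so back off.
            if ((ulong)Math.Max(0L, before - after) < totalPhysicalBytes / 100)
                interval = TimeSpan.FromSeconds(interval.TotalSeconds * 1.5);
        }
    }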


Recently I worked with a team that implemented just that and got good results.


Some people have asked why we don’t just give you a way to limit the amount of memory that a process is allowed to use. Imagine if we did: how would you use it? Do you set the amount of memory that your app is allowed to use to 90% of the physical memory? 90% sounds good… now, if everyone were to take this suggestion and set their app to use 90% of the memory, we are back to square one. If everyone set it to 50% and your app is the only active app on the machine, you’d be wasting half of the memory.


If your app is running in a very controlled environment (in other words, you are probably already using the hosting APIs to gain more control over your managed app), there are hosting APIs you can use to limit the amount of memory the managed heap can use. You can use IGCHostControl::RequestVirtualMemLimit to deny the request when the GC needs to acquire a new segment. So you can check the amount of memory the managed heap uses and, if it exceeds the limit you set, use this API to tell the GC not to add new segments. The result is that a GC will be triggered to try to get back more memory. If after the GC is done there’s still no space to satisfy your allocation request, you will get an OOM. Note that this means you need to be prepared to handle OOMs, which can be very difficult or even impossible. So using this API appropriately generally means you’ll need to use other hosting APIs to do things like communicating the memory load to the GC (a memory load that you define), tracking allocations, and terminating tasks or unloading app domains. Refer to the hosting API documentation for details.


BTW, if you really want to limit the memory for a process, you do have a way to do it via Win32 APIs. You can create a job object via CreateJobObject, associate it with your process, and specify the amount of memory the process is allowed to use (or, if you have only one process associated with the job, set the memory limit for the job) via SetInformationJobObject.
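For illustration, here is a minimal P/Invoke sketch of that approach from managed code; only the fields needed for a per-process memory limit are filled in, and the helper name and the 500MB figure in the usage note below are just examples:

    using System;
    using System.Runtime.InteropServices;

    static class JobMemoryLimit
    {
        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_BASIC_LIMIT_INFORMATION
        {
            public long PerProcessUserTimeLimit;
            public long PerJobUserTimeLimit;
            public uint LimitFlags;
            public UIntPtr MinimumWorkingSetSize;
            public UIntPtr MaximumWorkingSetSize;
            public uint ActiveProcessLimit;
            public UIntPtr Affinity;
            public uint PriorityClass;
            public uint SchedulingClass;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct IO_COUNTERS
        {
            public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
            public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
        {
            public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
            public IO_COUNTERS IoInfo;
            public UIntPtr ProcessMemoryLimit;
            public UIntPtr JobMemoryLimit;
            public UIntPtr PeakProcessMemoryUsed;
            public UIntPtr PeakJobMemoryUsed;
        }

        const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;
        const int JobObjectExtendedLimitInformation = 9;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
            ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION lpInfo, uint cbInfo);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

        // Caps the given process at maxProcessBytes of committed memory.
        public static void Apply(IntPtr processHandle, ulong maxProcessBytes)
        {
            IntPtr job = CreateJobObject(IntPtr.Zero, null);
            if (job == IntPtr.Zero)
                throw new System.ComponentModel.Win32Exception();

            JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
            info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
            info.ProcessMemoryLimit = new UIntPtr(maxProcessBytes);

            if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                    ref info, (uint)Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION))))
                throw new System.ComponentModel.Win32Exception();

            if (!AssignProcessToJobObject(job, processHandle))
                throw new System.ComponentModel.Win32Exception();
        }
    }

You might apply this to your own process at startup with something like JobMemoryLimit.Apply(Process.GetCurrentProcess().Handle, 500UL * 1024 * 1024); note that on older versions of Windows a process that already belongs to a job cannot be assigned to another one.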


Now, if your application simply needs to use a lot more memory than the amount of physical memory, what do you do? Currently (meaning on CLR 2.0 or earlier), if you need to constantly churn through all this data, you will incur full GCs often. If a large portion of the managed heap is stored in the page file and would need to be paged in to do a GC, you will take a pretty serious performance hit. It may be worse than the alternative, which is storing it as native data and only paging in what you need. You will still need to go to the page file very often, but you don’t need to pay the price of paging in everything to do a GC. So right now, doing caching in native code (in memory or in a file) could be a better choice, provided that the cache implementation has very low fragmentation. Sorry I don’t have a better answer, but we are well aware of this problem and are working hard on it.

Comments (16)

  1. Keep up the good work. I love reading your blog :)

  2. Paul Ming says:

    Assuming there’s plenty of virtual memory address space free, how would low available physical memory cause OutOfMemoryExceptions?

  3. maoni says:

    If you don’t have enough physical storage we will fail when we need to commit memory. We attempt a GC at this point and if afterwards there’s still not enough physical storage to commit we will throw an OOM.

    When you have a page file, physical storage includes both physical memory and the page file.

  4. Anil says:

    What if you have lots of physical memory but not enough virtual address space? How would you detect this situation and resolve it?

  5. maoni says:

    Anil, how we detect:

    Not enough virtual address space –> failure to reserve a new heap segment.

    Not enough physical storage –> failure to commit

    how we resolve:

    Basically we will try to make do with what we have. After that, if we are still short on space (to reserve or commit), we will trigger a GC and see if we can get enough memory back. If not we will OOM.

  6. Ramakrishna says:

    Is there any side effect of calling

    SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, -1, -1)? When I use this I see that the Mem Usage in the Task Manager drops significantly for the process.

  7. maoni says:

    Ramakrishna, SetProcessWorkingSetSize trims the working set and "Mem Usage" in TaskMgr indicates the process’s working set so that’s why you see it drop a lot.

    This means you are swapping the process out of physical memory. When you need to use the memory you’ll have to swap it back in.

  8. Mark E. says:

    >>

    […]if you really want to limit the memory for a process, you even have a way to do it via Win32 APIs. You can create a job object via the CreateJobObject call that you associate with your process, and specify the amount of memory the process is allowed to use (or if you have only one process associated you can set the memory limit for the job) via SetInformationJobObject.

    <<

    Can you provide an example or a link to an example? My co-worker and I have been working on this problem for some time now and are unable to get a working sample.

    The situation:

    Citrix server with our customer requesting/demanding that the application remain within a specified memory boundary. The program being replaced is not .NET and manages the RAM footprint very nicely. BTW, security of the user session doesn’t allow calls to SetProcessWorkingSetSize if that helps.

    -Mark E.

  9. Mark E. says:

    I did end up finding a solution to the continual memory growth problem that we were experiencing.

    I just wanted to share the solution with other readers who might come across this in the future.

    See my comments on this post (from "Mark E."):

    http://blogs.msdn.com/maoni/archive/2006/08/18/706726.aspx

    -Mark E.

  10. chuck says:

    We are using the sql report viewer with local reports.

    After every report export to pdf memory goes up by 200 meg (for hours). After 4-5 exports IIS crashes and the memory comes back.  MS said this is normal (SRX060712604389) it’s the way GC works.  

    Shouldn’t GC be called automatically if you have half a gig or more of memory that isn’t being used for hours?

  11. maoni says:

    chuck: did they explain to you why "this is normal"? The info you gave is not detailed enough for me to give any feedback.

  12. maoni says:

    chuck, I checked with my PSS contact on your case. It seems that your issue was related to RS, not GC. Our PSS engineer who worked with you didn’t hear back from you after he explained this to you. If you have additional questions, please do not hesitate to contact him again and let him know. Thanks.

  13. There are plenty of resources on the .NET garbage collector, ranging from fairly basic overviews that

  14. roy ashbrook says:

    Ah. Garbage Collection… how I love and hate thee. =P I think one sad thing about programming in .net

  15. nancy brownsville says:

    what makes memory runns out,i dont know much about this kind of thing i need help

  16. Rocco says:

    How the hell do you expect people to understand what you are writing about if they don't understand computer lingo — like me. I can't ask a question without is be answered in com. lingo. I can't read anything without it being in com. lingo.