Why adding more memory won’t resolve OutOfMemoryExceptions


Quite often I am met with the incorrect assumption that out-of-memory exceptions can be resolved by adding more memory. I can understand why you'd think that, but it won't help. Additional RAM may improve performance, but an extra 8 GB of RAM won't increase the amount of memory available to the .NET application.

Here's what you need to know:

If you have a 32-bit machine, then you have 2 GB of memory available.

That's the way it is, whether you have 512 MB or 8 GB of RAM. A 32-bit system can address 4 GB of virtual memory. 2 GB of that is reserved for the operating system, leaving 2 GB for each user-mode process. If there isn't enough RAM available, Windows uses the page file and writes infrequently used memory to disk instead. So additional RAM will cause fewer page faults, but the amount of memory available to the process is still only 2 GB.
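The arithmetic above is easy to sanity-check. A quick sketch (Python for brevity; the 2 GB user/kernel split is the Windows default and isn't something this code can query, it's just stated in the comment):

```python
import struct

# A 32-bit pointer can address 2**32 bytes = 4 GB of virtual memory.
# By default Windows reserves 2 GB of that for the kernel, leaving
# 2 GB of address space for the user-mode process.
address_space = 2**32
print(address_space // 1024**3)  # 4 (GB)

# Pointer size of the *current* process: 4 bytes in a 32-bit process,
# 8 bytes in a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"this process is {bits}-bit")
```

The point is that the ceiling is a property of the pointer width, not of how much physical RAM is installed.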

(Actually you can tune this in boot.ini. The /3GB switch will let you take 1 GB from the OS and increase the limit for the user-mode process from 2 GB to 3 GB. It still requires some additional tweaking, though. I suggest looking at http://support.microsoft.com/kb/820108/en-us for more information on this.)

Okay, so now that we know this it's time for the next vital piece of information:

Your process will most likely be out of memory when it reaches ~800 MB

- What!? How is that possible!? Didn't I just say that we have 2 GB available? I did, but there is another thing you really need to know about memory allocations:

Memory allocations need to be contiguous.

If you want to allocate a 100 MB string, then you need 100 MB of contiguous space. Memory allocations don't work the way the file system does. If we were talking about files we could save part of the file in one place, another part in another place, etc., but memory != file system. Open up the disk defragmenter and take a look at your hard drive. You may have 100 GB available on your 250 GB hard drive, but how much of that is contiguous? The available memory for your process will eventually become just as fragmented as your hard drive. And as your memory becomes more and more fragmented, it will be harder and harder to squeeze in those large allocations.
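The effect can be sketched with a toy first-fit allocator (purely illustrative; the real Windows heap and the GC's segments are far more complex). Even with 300 MB free in total, a single 100 MB allocation fails because no contiguous hole is large enough:

```python
# Toy model of a fragmented address space: each entry is a (start, size)
# free hole, sizes in MB. Plenty of total free memory, but scattered.
free_holes = [(0, 60), (100, 80), (250, 90), (400, 70)]

def first_fit(holes, request):
    """Return the start of the first hole that can hold `request` MB, or None."""
    for start, size in holes:
        if size >= request:
            return start
    return None

total_free = sum(size for _, size in free_holes)
print(total_free)                   # 300 -- 300 MB free in total...
print(first_fit(free_holes, 100))   # None -- ...but a 100 MB allocation fails
print(first_fit(free_holes, 80))    # 100 -- an 80 MB allocation still fits
```

This is exactly the restaurant scenario below: half the seats are free, but never four of them together.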

Let's say you and your party of four walk into a restaurant. The restaurant is packed with four-seat tables, so finding a seat shouldn't be a problem. Unfortunately, one to three people are sitting at each table. So even though only half the seats are taken, you are still unable to get your own table. The usher gets an OutOfSeatsException as he tries to seat you. 🙂

The rule of thumb is that you'll begin seeing OutOfMemoryExceptions at around 800 MB. Of course this isn't set in stone. It depends entirely on the average size of the objects you allocate and how fragmented your memory becomes. You might make it up to 1.2 GB, or you might see the exceptions as early as 550 MB.

So what can be done?

Well, if you're in the planning stage of a web service that will load 4 GB images, process them somehow, and then return them to the client, you'll want to either:

  1. Work with streams and make sure you never load the full image into memory
  2. Set aside the money to invest in a 64 bit system
  3. Rethink
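Option 1 can be sketched like this (Python for brevity; in .NET you'd do the same thing with System.IO.Stream and a fixed-size byte buffer). The names `process_stream` and `CHUNK_SIZE` are made up for illustration:

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KB buffer: memory use stays constant
                        # no matter how large the image is

def process_stream(src, dst, transform):
    """Copy src to dst in small chunks, applying `transform` to each
    chunk, so the full image is never held in memory at once."""
    while True:
        chunk = src.read(CHUNK_SIZE)
        if not chunk:
            break
        dst.write(transform(chunk))

# Usage, with in-memory streams standing in for request/response bodies:
src = io.BytesIO(b"\x00" * 200_000)
dst = io.BytesIO()
process_stream(src, dst, transform=lambda b: b)  # identity "processing"
print(len(dst.getvalue()))  # 200000
```

The peak allocation here is one 64 KB buffer, rather than one contiguous block the size of the whole image, which is precisely what sidesteps the fragmentation problem described above.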

I'm currently writing a bigger post on how the managed heap and garbage collector work, so expect more to come on the subject.

/ Johan

Comments (7)
  1. Doug says:

    Another reason that adding more memory will not cure out of memory exceptions is that it appears that some of the framework methods throw this exception when the real problem is something else.

    I have noticed that passing a malformed GraphicsPath to GDI+ can sometimes cause an out of memory exception when the real cause is a bad argument.

    To be fair, it may have run out of memory but that was not the root cause.

  2. JohanSt says:

    Yes that’s true. For example the Image.FromFile method may throw the exception, as can be found in the documentation:

    http://msdn2.microsoft.com/en-us/library/stf701f5(vs.80).aspx

  3. Garry Trinder says:

    But, isn’t stealing memory from the OS an unsupported hack which might backfire sometime later?

    And, won’t the GC compact the memory to make the free space contiguous? I mean, yeah, it can’t move the mem of other processes, but within the 2 gig limit?

    Just curious, ‘coz you’ll probably forget more about the GC than I’ll ever know 😀

  4. JohanSt says:

    Well yes and no,

    You could call it a hack, but it is a documented and supported approach.

    http://msdn2.microsoft.com/en-us/library/aa366521.aspx

    In regards to compacting the memory: The GC only has 2 GB to play with. Depending on your OS it will reserve segments of 32 or 64 MB. Once a segment is full it will reserve another. These segments will be in any order and they will also be obstructed by dlls, threads, etc.

    The GC will not "defragment" the segments once they’ve been reserved. It would be far too costly.

  5. Garry Trinder says:

    But then again, I guess your app would have significantly more problems if it’s allocating at that rate 😀

    And, thanks for the link.

  6. I think Johan will explain this in the new post he promised, but to make a long story short, not all memory chunks can be compacted; the GC does its best to collect garbage, release memory and also compact it *when possible*. Simply, there are some situations where this would be far too expensive in terms of performance, CPU usage etc…

    But I think Johan will well explain this with lots of details, so be patient and wait for the new post 🙂

  7. If your application crashes, hangs, or deadlocks, it will cause/require the application pool to recycle

Comments are closed.
