.NET Memory: My object is not rooted, why wasn’t it garbage collected?

I got a comment on one of my posts asking this question, and I started writing a comment-answer, but it turned into a long-winded rant so I decided to blog it instead :)

So you’re looking at a dump and run !gcroot on your object, but it doesn’t find a root. Then why is the object still around on the heap?

There are many reasons for this but the short answer is:

It was still alive (rooted) last time a garbage collection for that specific generation was run.

This is not completely true… it could be that there is a problem with !gcroot in this specific case, causing it to not find the root, but this would be pretty rare. It could also be that you are running the workstation version, where only partial collections are done if the garbage collection takes too long, since the workstation GC is optimized for applications with UI and we don’t want to block the UI threads, causing the UI to flicker.

Short of this, we can go back to “the object was alive during the last collection” and take a look at some of the cases for this.

Garbage collection is allocation triggered, except in a few cases such as the app manually calling GC.Collect() or the ASP.NET cache reduction mechanism.

What does this mean?

Simplified, each generation (0, 1, 2 and large object) has a limit, i.e. how much data can be stored in each generation before a garbage collection occurs. These limits are dynamically adjusted based on the application’s memory usage.

When the application allocates a new object and the limit for generation 0 is exceeded, a gen 0 GC is triggered. Objects in use (rooted) are then moved to Gen 1. If this causes the Gen 1 limit to be exceeded, a Gen 1 GC is triggered and objects in use move to Gen 2, etc. Gen 2 is currently the max generation. Large objects are treated separately.
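A quick sketch (my own addition, not from the original post) of how you can watch this promotion happen, using GC.GetGeneration:

```csharp
using System;

class Program
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // new small objects start out in gen 0

        GC.Collect(); // o is rooted, so it survives and is typically promoted to gen 1
        Console.WriteLine(GC.GetGeneration(o));

        GC.Collect(); // survives again and typically ends up in gen 2, the max generation
        Console.WriteLine(GC.GetGeneration(o));

        Console.WriteLine(GC.MaxGeneration); // 2

        GC.KeepAlive(o); // keep o rooted through the collections above
    }
}
```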

So let’s say you perform a stress test and then leave the server idle for 4 hours with no requests. This means that no new allocations are made, and thus no new GCs are triggered, so the memory used by the process will never be reduced. In essence, this does not mean that you have a memory leak; it just means that you are not triggering any new GCs. A proper stress test should have some kind of slow-down period after the main stress.

Going back to the objects that are not rooted…

So now we know that the most likely reason is that a garbage collection of that generation has not occurred since the object was unrooted.

Another alternative is that the object has a finalizer method and thus is registered (and rooted) in the finalize queue, and will therefore be held until the finalizer thread gets around to finalizing it. Or it could be a member variable of an object with a finalizer.

This would not show up in !gcroot, but you can see the object show up in !finalizequeue. 

Implementing a finalizer method (even if there is no code in it) will automatically put your object on the finalize queue, which means that your object will survive at least one garbage collection, so you should carefully consider whether your object really needs a finalizer.
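To illustrate this, here is a sketch of my own (hedged with “typically”, since exact GC timing is not guaranteed): an object with a finalizer survives the collection that finds it dead, because it first has to be queued for finalization, and only goes away once the finalizer thread has run:

```csharp
using System;

class WithFinalizer
{
    ~WithFinalizer() { } // even an empty finalizer puts the object on the finalize queue
}

class Program
{
    // Allocate in a separate method so no stack reference keeps the object rooted.
    static WeakReference Allocate() =>
        new WeakReference(new WithFinalizer(), trackResurrection: true);

    static void Main()
    {
        WeakReference wr = Allocate();

        GC.Collect(); // object is unrooted, but gets queued for finalization instead of dying
        Console.WriteLine(wr.IsAlive); // typically True: it survived this collection

        GC.WaitForPendingFinalizers(); // let the finalizer thread do its work
        GC.Collect();                  // now the object can actually be reclaimed
        Console.WriteLine(wr.IsAlive); // typically False
    }
}
```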

Worst case, your finalizer might be blocked, so it will take a long time for your object to be finalized, if ever… (run !threads to identify the finalizer thread and check if perhaps it is stuck finalizing objects). I have it on my todo list to write a “case study” on blocked finalizers.

Another alternative is that the object you are looking at is a “large object”, or a member variable of a “large object”. Garbage collection of the large object heap is much more infrequent than the small object heap, which means that any object that is stored on the large object heap may stay around for a substantial amount of time.
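For reference, a small sketch (the ~85,000 byte threshold is my addition, not from the post) showing that a large allocation lands straight on the large object heap, which the runtime reports as the max generation:

```csharp
using System;

class Program
{
    static void Main()
    {
        byte[] small = new byte[1000];   // small object heap, starts in gen 0
        byte[] large = new byte[100000]; // over the ~85,000 byte threshold, goes to the LOH

        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(large)); // reported as the max generation (2)

        GC.KeepAlive(small);
        GC.KeepAlive(large);
    }
}
```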

Finally, if you have a repro, you can try calling

GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();

and see if your object is still around after executing this.

This will garbage collect all generations (including large object), then execute any finalizers, and then garbage collect again to take care of all the objects that had finalizers.

The specific question in the comment was “could this be because it is used in interop?”

The answer is no. With interop, if your object was still in use it would be rooted via a refcount, as a pinned object, or similar, depending on how it was created and used.

Note: this is by no means an exhaustive list of all reasons; it’s just the most common ones that came to mind when reading the comment, but hopefully it at least somewhat explains why you see unrooted objects on the heap…

I would recommend that you take a peek at Maoni’s blog on using the GC efficiently (in the blogs I read section) if you want an interesting read on what the GC does.

Happy debugging!


Comments (20)

  1. ricom says:

    Amend “the object was alive during the last collection” to “the object was alive the last time the heap was compacted” and you’d cover the workstation GC as well. That also covers cases where the GC declines to compact the heap because it seems not worth the expense of doing so.

  2. Tess says:

    Completely agree:) thanks for the clarification…  

    and speaking of blogs to read if you want to know what the CLR and the GC really does under the covers, Rico’s is one to subscribe to…

  3. A few days ago I posted a question I had gotten on email (look here for complete post):   "We use…

  4. Netanel Livni says:

    Is there any way to compact the allocated memory used by the large object heap? As it gets more and more fragmented, we see our allocated memory grow to mammoth sizes.

  5. Tess says:

    Hi Netanel,

    There is no way to force compaction. Technically you could collect it using GC.Collect, but it is not recommended. Instead you should take a close look at what is on the large object heap and try to avoid using it as much as possible, e.g. by chunking up the data, since high usage of the LOH causes high CPU in the GC.

  6. A few days ago I posted a question I had gotten on email (look here for complete post): " We use Page.Cache

  7. get stuck poor boy says:

    I created an instance of a class through the Assembly.CreateInstance() method. I want it to be GC’ed after I run some of the methods inside it. To make sure, I call GC.Collect() after finishing, but it had not been removed, and with a for loop of 10 iterations with 10 GC.Collect() calls, my memory increases without stopping.

    Is there anything special about CreateInstance()???

  8. get stuck poor boy says:

    Oh, and how can I trace the references to my object???

  9. Tess says:

    Without the details of the repro test, I’m going to guess that your object is still kept alive on the stack (referenced by a stack pointer) when the GC.Collect call is made. Testing stuff like this in a loop is tricky; preferably, if you want to test it, you should have another button (outside of the one with the loop) that calls GC.Collect, GC.WaitForPendingFinalizers and GC.Collect again, to make sure it has been finalized as well if it needs to be.
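    A sketch of that kind of test (the names are mine, just to show the idea): allocate in a separate method so no stack reference keeps the object alive, then check collectability from the caller through a WeakReference:

    ```csharp
    using System;

    class Program
    {
        // Stand-in for whatever Assembly.CreateInstance returns; allocated in its own
        // method so the caller's stack holds no reference to it.
        static WeakReference Allocate() => new WeakReference(new object());

        static void Main()
        {
            WeakReference wr = Allocate();

            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine(wr.IsAlive ? "still referenced somewhere" : "collected");
        }
    }
    ```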

    If that is not enough, run !gcroot on the object to see where it is rooted.

  10. Wallace says:

    Very useful and good article, but I have one question:

    We only have generations 0, 1 and 2, so what does GC.Collect(3) do?



  11. Tess says:

    We have generations 0, 1 and 2, plus the large object heap. GC.Collect(3) or GC.Collect() performs a full collection of all of these.

  12. During our ASP.NET debugging chat there were many questions around the GC and the different generations.

  13. Montu says:


    I am using WMI queries in my Windows Service. My service has a timer which performs WMI queries to fetch the currently running processes on the machine and writes them to a log every 60 seconds. Fetching all the running processes and writing the log takes only 4 seconds. When I look at Task Manager, for the first 60 seconds the mem usage is 7 MB, and when the process runs it goes up to 19 MB and never comes down again. I have disposed all the objects and am not sure what I am missing. I am calling GC.Collect, GC.WaitForPendingFinalizers and GC.Collect in the timer when the operation is over, but still it is not reducing the mem usage. Any help on this would be appreciated.

  14. Hi There, I can’t believe I’ve found someone who can help me…

    First, I would like to say that I love your blog and read your work weekly.

    After a lot of reading I finally came to this post, and I very much hope you can answer me.

    I came across a problem that I have read about in many forums and blogs, and people don’t actually have a good solution for it: .NET memory consumption.

    Microsoft’s guidelines blame our system design, but I don’t believe that I have a poor design.

    I have the following VPS environment that hosts my ASP.NET application:

    Windows 2003 Server

    500 MB

    8 processors

    The ASP.NET application is layered as follows:

    1 – UI layer: ASP.NET files

    2 – Service Interface: a façade object that mediates between the UI layer and the Business Layer

    3 – Business Layer

    4 – Data Access Layer

    This is a Microsoft guideline design standard, and most architects should be familiar with it.

    In the service layer, I implement IDisposable for all objects, and I believe that when the GC collects an object in the service layer it will work on all layers beneath.

    On the IDisposable method implementation I have the following code:



    This application used to hold almost 200 MB when I looked in Task Manager.


    I have done a lot of research looking for better performance. I used many tools for debugging memory, performance counters and so on. I changed a lot of user controls on the ASP.NET pages, and I made some changes following the Microsoft guidelines for .NET performance and managed code performance.


    I finally got my ASP.NET application to start up holding 30 MB (before it was 70 MB).

    When my 3 users start working in it, the ASP.NET worker process goes to 70 MB.


    When users use Crystal Reports for PDF or Word creation, the application goes to 150 MB, depending on the size and report complexity.

    These reports are generated in the UI layer with data produced by the Business layer and the other layers beneath. Also, when I run heavy logic in the business layer (looping over 10 thousand items) the memory goes up quickly.


    I have made so many changes looking to improve the application, because I would like to host another application on the same server.

    The following code works for me, but nobody recommends using it, and I don’t believe it is a good idea to use in an ASP.NET environment:




    So, the simple question is: how can I free the memory and stop an ASP.NET application from holding so much memory before the IIS recycle kicks in?

  15. Tess says:


    First off, it is important to know what you are looking at in Task Manager, i.e. whether it is working set, private bytes or virtual bytes. Preferably you should instead look in perfmon at # Bytes in all Heaps to see if it is really your .NET objects that are causing the memory increase.

    As you know, .NET reserves memory in heap segments (32 MB, 16 MB or 64 MB depending on the .NET framework version and GC mode), so if you are looking at virtual memory then that could potentially go up and not come down. But having a high amount of virtual memory reserved (as long as it is not high enough to cause an OOM in the app) doesn’t really matter as long as it isn’t committed. I say this with some reservations, but for example if you added another application to the same application pool, it would be able to use memory out of these reserved segments. If you have it in a different app pool then it wouldn’t matter either, as unused memory pages would be swapped from RAM out to disk.

    Now, if it is in # Bytes in all Heaps, there are two reasons why the objects wouldn’t be released:

    1. you have a reference to them (in that case, go through some of my memory investigations to debug it further), or

    2. the GC hasn’t run yet and collected them.

    In case #1 a GC.Collect wouldn’t help, and in case #2 you still wouldn’t need a GC.Collect, because if you were to allocate more memory so that you needed the memory currently used by these objects, a GC would run and collect them anyway…

    Note: if there are no allocations, the GC will not run, so if you create a large report and use up 150 MB and then let the process idle, the memory will not get released, as mentioned in the article…

    Basically, it’s hardly ever a good thing to run GC.Collect, because you end up messing up the GC’s optimizations, and in 99.99% of cases you don’t actually need it.

    On the other hand, an application that uses 80 MB to create a report (150-70) doesn’t seem like a very scalable application. It sounds like you might want to look into why it needs that much memory for that operation…

  16. Venkatesh says:

    "since the workstation GC is optimized for applications with UI"

    Can you please explain this in more detail? Our application is getting OutOfMemory exceptions after it has been in use for some time. It is a WinForms application with three tiers, running on Terminal Servers. Can you please tell me if garbage collection on a Terminal Server is different from on normal desktops? We are getting different feedback from Terminal Server users and desktop users.

  17. Tess says:

    The concurrent workstation version is optimized for UI applications in the sense that it tries to minimize the time that the process is blocked while doing a GC.  This means that the collections are not as thorough, which in turn means that some objects stay around for longer than they would if using the server gc.

    Someone would have to take a look at your specific case in more detail, but there is nothing specific about the GC on terminal servers. What it sounds like might be happening is that if you have multiple people logging in to the same terminal server, you have a lot of instances of your app running, and you might simply be running out of RAM + page file, i.e. running low on the amount of virtual memory you can use, and therefore getting an OOM.

  18. yanli says:

    I’ve been reading some of your posts and found them very useful in helping me track down an OOM issue in a production environment. However, I haven’t completely figured out the root cause yet, so I wonder if you could help take a look at my case.

    We have a CAO remoting object hosted by IIS. The remoting object loads large amounts of data and performs lengthy tasks. I know it’s not good practice, but the application is already written and is not easy to rewrite, so we have to work with it. The remote object implements IDisposable, and the client calls Dispose at the end to release all the resources.

    The application runs on a Win2003 server with 3.5 GB RAM. In the machine.config file we set the memory limit to 60%, so in theory ASP.NET could use up to 2 GB RAM. In performance monitor, the CLR memory LOH and Gen 2 shoot up to 800 MB and 200 MB respectively, and the process private bytes also shoot up to close to 1 GB, so most of the memory is managed objects. I noticed that both Gen 2 and the LOH kept increasing and never went down during or after processing. After the process finished I took a hang dump and found that the remote object is rooted (very long lifetime) but is only 7K in size (so Dispose is effective in releasing objects). I tracked down many objects and they are all un-rooted. But when I perform the same task again (create another CAO instance and process a large amount of data), we get an OOM. The Gen 2 collect count has stayed flat at 15 since the first test. There are many Gen 1 collects, but I couldn’t figure out why there are so few Gen 2 collects. Since most objects on the heap are not rooted, why doesn’t it perform a full collect when a large allocation is needed? Could this be specific to Win2k3 with .NET 1.1 SP1? Our client doesn’t have a different server to test on, so I don’t know if the issue would appear in a different environment. Since RAM is large, would the page file size matter? It is currently set at 2 GB, but we’ll do some more testing with different sizes later. I haven’t used GC.Collect yet, but if nothing works I might try it. The application is complex and the rewrite effort is huge.

    I’d appreciate if you could offer any advice.

  19. Tess says:

    It’s very hard to say without looking at the heap, but you can try running !objsize without parameters to see if there is something that is holding on to a lot of large objects.

  20. Scott says:


    If you have implemented IDisposable on your objects, then are you making sure that you manually call Dispose() on each object that you no longer need?  You must manually call Dispose() or the objects will get put on the finalization queue and stick around for at least one more generation.

    Also, you should not implement IDisposable at all on an object unless you absolutely must use it to free up unmanaged resources. If the object does not explicitly contain any unmanaged resources, you should not be implementing IDisposable (or a finalizer/destructor) at all. I recently discovered a problem in our application in which we were allocating many instances of a certain type of IDisposable object, and the finalizer thread couldn’t keep up, so the memory use just ballooned and then threw an out of memory exception. It turned out that there was no reason for the object to be IDisposable, so I just got rid of the IDisposable code and the finalizer/destructor, and the application ran nicely again.
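    To make that concrete, here is a hedged sketch of the usual pattern (the class name and buffer are made up for illustration): implement IDisposable plus a finalizer only when the type directly owns an unmanaged resource, and call GC.SuppressFinalize in Dispose so a properly disposed object never has to go through the finalize queue:

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Hypothetical type that directly owns an unmanaged resource.
    class NativeBufferHolder : IDisposable
    {
        private IntPtr buffer = Marshal.AllocHGlobal(1024); // the unmanaged resource
        private bool disposed;

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this); // skip the finalize queue when disposed properly
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposed) return;
            Marshal.FreeHGlobal(buffer);
            disposed = true;
        }

        ~NativeBufferHolder() { Dispose(false); } // safety net only, runs on the finalizer thread
    }
    ```

    A class that holds only managed members needs neither the finalizer nor, usually, IDisposable at all.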

    Hope this helps,