When to call GC.Collect()

One of my first postings was this one: Two things to avoid for better memory usage, in which I gave some approximately correct advice (is there any other kind? <g>) about using GC.Collect().  I still stand by that advice but I think maybe this is a good time to expand on it.  So now I offer you Rico's rules for calling GC.Collect().

Rule #1

Don't.

This is really the most important rule.  It's fair to say that most usages of GC.Collect() are a bad idea and I went into that in some detail in the original posting so I won't repeat all that here.  So let's move on to...

Rule #2

Consider calling GC.Collect() if some non-recurring event has just happened and this event is highly likely to have caused a lot of old objects to die.

A classic example of this is if you're writing a client application and you display a very large and complicated form that has a lot of data associated with it.  Your user has just interacted with this form, potentially creating some large objects... things like XML documents, or a large DataSet or two.  When the form closes, these objects are dead, and so GC.Collect() will reclaim the memory associated with them.
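
A minimal sketch of what that might look like (the class and method names here are hypothetical, not from the post, and per Rule #1 you'd only wire this up after measuring):

```csharp
using System;

// Hedged sketch: "BigFormCleanup" and "ReclaimAfterClose" are made-up names.
// The idea is simply to force one full collection right after a heavyweight
// form closes, when its generation-2 objects have just become unreachable.
static class BigFormCleanup
{
    // Call this from the form's Closed event handler, once the large
    // DataSet / XML document references are no longer reachable.
    public static void ReclaimAfterClose()
    {
        GC.Collect(); // one full collection sweeps the newly dead gen2 objects
    }

    static void Main()
    {
        ReclaimAfterClose();
        Console.WriteLine(GC.CollectionCount(GC.MaxGeneration)); // at least 1
    }
}
```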

Now why would I suggest this as a possible time to call the collector?  I mean, my usual advice goes something like "the collector is self-tuning so don't mess with it."  Why the change of attitude you might ask?

Well here is a situation where the collector's tendency to try to predict the future based on the past is likely to be unsuccessful.  In a well behaved client application, even while the form is running almost all the objects will die in generation 0 or generation 1.  Only the initial objects associated with the form will go into generation 2.  Any objects that do find their way into generation 2 are likely to stay there for the duration of the form.  Now this is a great situation from a performance perspective.  The Gen0 and Gen1 collects are nice and cheap.  Promotion to Generation 2 is nice and low.  The collector is thinking to itself "Ah, life is good.  There's no trash in gen2 and everything is dying in gen1 and gen0.  No need for any nasty gen2 collections."  And sure enough the collector is dead right.  There is no need for a gen2 collect.  Your dialog will get great gc performance.  Everything is wonderful.
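
You can watch these generation mechanics directly with GC.GetGeneration (a small illustration; the exact promotion timing is an implementation detail of the collector, so the comments hedge rather than promise):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // freshly allocated: generation 0

        GC.Collect(); // survivor is still referenced, so it gets promoted
        GC.Collect(); // surviving a second collection promotes it further
        Console.WriteLine(GC.GetGeneration(survivor)); // now an old-generation object

        GC.KeepAlive(survivor); // keep the reference live through both collections
    }
}
```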

Right up until that close button is pressed.

When the form is (finally) closed, all those objects related to the form that were in generation 2 just died.  But if your application is well behaved, even after the dialog is closed everything still tends to die in gen0 and gen1.  It all looks the same to the collector.  It doesn't know a dialog just closed and lots of long lived objects are now dead.  With nothing new going into generation 2 it still looks like a collection would be a waste of time.  But it isn't.

The way this looks to you observing the client application is that when the form is closed hardly any memory is returned to the system.  If you look at the GC counters you'll see a big GC heap.

Contrast this with, say, a web server that is getting regular requests.  When a web form is finished, sure there's some junk in gen2 (hopefully not too much or you'll get mid-life crisis) but another form is coming along in just a few milliseconds and so there will be objects going into generation 2 for that form at the usual rate.  All of this is nice and predictable and the collector will soon get into a nice groove.  It's the repetition that's making everything wonderful in the server case.

So, when a non-repeating event involving a lot of object deaths occurs (such as the completion of startup-time work, or the closing of a big dialog), consider using GC.Collect() at that time to reclaim long lived objects.

Don't bother doing this if there aren't a lot of old objects involved.

Rule #1 should trump Rule #2 without strong evidence.

Comments (65)
  1. Matthew W. Jackson says:

    Very informative, as usual. I’ve often wondered if it would ever be wise to collect after closing a form. While I’ll be sure to remember rule #1, I’ll be sure to consider rule #2 when a form works with large amounts of data.

    Which brings up another case where I’m wondering whether it *might* be okay to call GC.Collect():

    Although I haven’t had the chance to test this idea, I’ve always thought that it might be good for a managed game to call GC.Collect before loading a new level. For example, the game could show a "Level Loading" screen, call GC.Collect() to make sure the previous level isn’t wasting memory, and then go on to actually load the new level data.

    I know that this wouldn’t be a one-time event in the same way as you describe, but in the context of a game wouldn’t it be better to force a garbage collection at a time when the user will already be waiting? Since the amount of time between loading levels may vary drastically, will this actually help the garbage collector?

    Are you saying that the self-tuning garbage collector would actually work better in this case? What about garbage collecting while a game is paused?

    Maybe I need to get a game written and go "measure." I’m really curious as to how well the garbage collector can perform in a variety of scenarios. I’d really like to see a few more games using managed code (at least for higher-level logic), and I’d hate to think the GC would be a big bottleneck.

  2. timts says:

    I wrote a simple console application. The problem is that it loads stuff from the database continuously at high speed, so the GC can't keep up with it!!!

    Even GC.Collect() does no good, so I have to do a forced global GC to fix the memory usage problem.

    it has nothing to do with forms.

  3. Darren Oakey says:

    hmm…. I would disagree…

    how about rule1: use GC.collect often and with a vengeance! 🙂

    In my experience, the .net GC is a piece of **** and shouldn’t be trusted. We have found a number of production bugs that have been solved by a single introduction of GC.Collect.

    try it – make any batch routine that say – traverses your disk, opens each file and computes a checksum, and run it with task manager open – it’s quite an eye-opener. The memory just goes up… and up… and up…

    Now, insert the lines GC.Collect(), GC.WaitForPendingFinalizers() after processing each file. Your job will run _quicker_, because the system isn’t continuously allocating memory, and the memory usage of your program will remain constant.

    I would say

    rule 1: Use GC.Collect at the end of any major "operation", or any form finishing

    rule 2: In any loop situation, always follow it with GC.WaitForPendingFinalizers

    rule 3: explicitly dispose anything that it’s possible to dispose – to a large extent, pretend the GC doesn’t exist, and you are in an old language.

  4. Denis Sakarov says:

    Great post, and yes it’s all a matter of policy, but there are more caveats to consider before applying Rule #2. Here’s one: consider the following scenario. The app detects that a big form is closed, so it calls GC.Collect(). The app hangs (or at least responsiveness decreases) while the GC is clearing Gen 2.

    Here’s how I would do it. The user closes the form *and* she minimizes the app. Now you can call GC.Collect() because it’s less likely the user will restore the app than it is for her to do something else in the app right after she closes the form.

  5. >if you’re writing a client application and you

    > display a very large and complicated form that

    > has a lot of data associated with it.

    Using Excel via InterOp seems to fit in this category. I’ve run into places where Excel will not quit unless you explicitly do GC.Collect() as described in this MS knowledge base article


  6. Kent says:

    Thanks, great article Rico.

    @Darren. I thought I’d try your suggestion. Here is my code:


    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    namespace GCTest {
        class Entry {
            static void Main(string[] args) {
                Console.WriteLine("GCTest [-gc]");
                bool collect = false;
                if ((args.Length > 0) && (args[0] == "-gc")) {
                    collect = true;
                }

                string[] rootDirNames = Directory.GetDirectories(@"C:\");
                DirectoryInfo[] rootDirs = new DirectoryInfo[rootDirNames.Length];
                for (int i = 0; i < rootDirs.Length; ++i) {
                    rootDirs[i] = new DirectoryInfo(rootDirNames[i]);
                }

                Thread infoThread = new Thread(new ThreadStart(OutputInfo));
                infoThread.Name = "info";
                infoThread.IsBackground = true;
                infoThread.Start();

                DateTime start = DateTime.Now;
                ulong sum = ComputeSum(rootDirs, collect);
                DateTime end = DateTime.Now;

                Console.WriteLine("Sum computed: {0}, took {1}", sum, end - start);
            }

            private static long numFiles;

            private static ulong ComputeSum(DirectoryInfo[] dirs, bool collect) {
                ulong retVal = 0;
                foreach (DirectoryInfo dir in dirs) {
                    if (dir.Exists) {
                        foreach (FileInfo file in dir.GetFiles()) {
                            if (file.Exists) {
                                retVal += (ulong) file.Length;
                                Interlocked.Increment(ref numFiles);
                            }
                        }

                        if (collect) {
                            // Darren's suggestion: force a collection per directory
                            GC.Collect();
                            GC.WaitForPendingFinalizers();
                        }

                        retVal += ComputeSum(dir.GetDirectories(), collect);
                    }
                }
                return retVal;
            }

            private static void OutputInfo() {
                Process p = Process.GetCurrentProcess();
                int initialVM = p.VirtualMemorySize;
                int initialWS = p.WorkingSet;
                int initialTotal = p.VirtualMemorySize + p.WorkingSet;
                while (true) {
                    Thread.Sleep(1000);
                    p.Refresh(); // counters are cached, so refresh each pass
                    int vm = p.VirtualMemorySize;
                    int ws = p.WorkingSet;
                    int tot = p.VirtualMemorySize + p.WorkingSet;
                    Console.WriteLine("Virtual Memory: {0} (delta {1})", vm, vm - initialVM);
                    Console.WriteLine("Working Set: {0} (delta {1})", ws, ws - initialWS);
                    Console.WriteLine("Total Memory Usage: {0} (delta {1})", tot, tot - initialTotal);
                    Console.WriteLine("Number of files processed: {0}", numFiles);
                }
            }
        }
    }

    With my very simple testing, this ran faster without explicit GC’ing.

    Maybe I’ve just done something stupid in my code but I’d be interested to see what other people find.

    My C: has 97217 files on it. I ran a Release build without an attached debugger. I choose to drink Coke.

  7. Chris Nahr says:

    I had a scenario similar to what Matthew Jackson described. In a strategy game written in C#, I let the AI work in a background thread on a deep copy of the current game configuration. As the game turns progress, many deep copies are created and swapped with each other, making earlier copies obsolete.

    Now when the configuration gets big (10 MB or more), I experience pretty much what Darren Oakey said. Performance is dramatically better when I do an explicit garbage collection before performing a deep copy.

    GC "auto-tuning" really seems quite helpless when lots of big objects are allocated, and automatic GC is for some reason much slower than manual GC in this case.

  8. Paul Hill says:

    > Here’s how I would do it. The user closes

    > the form *and* she minimizes the app.

    > Now you can call GC.Collect() because it’s

    > less likely the user will restore the app

    > than it is for her to do something else in

    > the app right after she closes the form.

    Except minimising means they’ve moved onto something else, and on a constrained system you’ve just forced the GC to hard fault page after page while it collects. On a workstation the GC runs in its own thread, so responsiveness isn’t typically an issue.

    > Performance is dramatically better when I do

    > an explicit garbage collection before

    > performing a deep copy.

    Have you thought about serialization rather than a deep copy in this instance?

  9. GC.Collect() Good or Bad?

  10. Eric Newton says:

    You guys are running into the GC not knowing the true memory pressure of some of the Gen2 objects that represent unmanaged resources.

    Framework v2 introduces an additional metric (in theory that should be applied to unmanaged resources wrappers) that indicate to the GC the true memory pressure of the given managed object.

    For instance, consider a managed class that represents a 100KB byte array that lives in the unmanaged world… to the GC, the managed object is only a measly 100 bytes (basically the pointer to the 100KB of unmanaged bytes), so it gets a much lower memory pressure priority than a managed object with a much more complex object graph. Framework v2 introduces GC.AddMemoryPressure (or similar) that allows you to "attach" the 100KB byte array (as a metric only) to the managed object. So now the GC sees the managed object as causing 100KB + 100 bytes of memory consumption, and will prioritize accordingly.

    So for Framework v1.*, GC.Collect() makes sense after certain infrequently run areas of the app, but even this changes for Framework v2.

    Windows Forms objects will get a semi-automatic upgrade because the object that supplies the Handle property present on all Form controls is being upgraded to include this metric.

    Use this search to find out more info from various .Net power-speakers (including EricGu), regarding Framework v2 memory pressure:


  11. Chris Nahr says:

    Paul, I’m afraid I don’t understand your question:

    "Have you thought about serialization rather than a deep copy in this instance?"

    All deep copies that are not discarded must remain in memory. The user looks at the old configuration while the AI works on a deep copy. If there is a prediction tree, multiple deep copies must remain in memory for the AI. How would serialization help me here?

  12. Rico Mariani says:

    Some thoughts on this:

    The most important thought is that it is astonishingly hard to comment on the performance of other people’s applications in any intelligent way. So I’m going to take some guesses based on experience that I think are generally useful but please don’t take this to mean that I think you’re wrong about your specific situation. I’m sort of in the advice business so I have to assume it’s "horses" and not "zebras" even though I know sometimes it really is "zebras."

    OK with that disclaimer here’s the real thoughts 🙂

    1) Eric is absolutely right that in Framework v1.1 there was no way to explain to the garbage collector that you have some managed objects that are holding on to a whole lot of unmanaged memory (e.g. bitmaps). In those cases the aggressiveness of the collector should be related to the full memory pressure, not just the managed size of the holder. Framework 2.0 introduces GC.AddMemoryPressure() and RemoveMemoryPressure() which can be used to indicate the "collateral" allocations associated with managed objects so that the GC has a clue.
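
    A sketch of how points (1) and (2) fit together (the UnmanagedBitmap type and its sizes are made up for illustration; real code would allocate and free actual unmanaged memory where the comments indicate):

```csharp
using System;

// Hypothetical wrapper around a large unmanaged allocation. AddMemoryPressure
// reports the unmanaged cost so the collector's aggressiveness matches the
// real memory pressure; Dispose frees it and un-reports the pressure.
sealed class UnmanagedBitmap : IDisposable
{
    readonly long _unmanagedBytes;
    bool _disposed;

    public UnmanagedBitmap(long unmanagedBytes)
    {
        _unmanagedBytes = unmanagedBytes;
        // ... allocate the unmanaged bitmap memory here ...
        GC.AddMemoryPressure(_unmanagedBytes);
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        // ... free the unmanaged bitmap memory here ...
        GC.RemoveMemoryPressure(_unmanagedBytes);
        GC.SuppressFinalize(this);
    }

    static void Main()
    {
        using (var bmp = new UnmanagedBitmap(100 * 1024))
        {
            // work with the bitmap; the GC now knows about the extra 100KB
        }
    }
}
```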

    2) When you have objects like in (1) it is vitally important that you use the Dispose pattern to reclaim as many as possible. If some have exotic lifetime they will be collected for you of course but it is much better to Dispose all the ones you can.

    See this article for more information:


    3) If you are in a situation where regular calling of GC.Collect() in some cyclical fashion is necessary you are going to be in a world of hurt performance-wise. Forced collections like that are going to wreak havoc with the statistics gathering ability of the collector — I have this mental image of an engine with too much fuel or not enough air or both (or is that really the same thing <g>)

    If you have an application that has a lot of objects that are going into Generation 2 and then dying (some of the above sound like that *could* be what’s going on) then you are making things very difficult for the collector. You could have "mid life crisis" (see http://weblogs.asp.net/ricom/archive/2003/12/04/41281.aspx and http://weblogs.asp.net/ricom/archive/2004/02/11/71143.aspx)

    Now it could be that you reclaim memory faster by forcing more collections, but doing that on a regular basis will cause your percent-time-in-GC to shoot up. You are more likely to get better performance by adopting a strategy that avoids cyclical deaths of objects in Generation 2. The two links discuss ways of accomplishing that.

    4) Notwithstanding the bad experiences some of you have obviously had, collecting often isn’t the right answer. If you must begin coding with the supposition that the GC is a piece of @#$% really you’re doomed from the outset. I certainly will not say that managed code is the best solution to every problem — I used to work on the C++ team and I can tell you those tools have never been better. If they are the right answer then by all means reach for them. After all I’m in the developer productivity business not the developer punishment business.

    5) Keep in mind that we tune the collector for more and more scenarios over time so you can expect it to be stable under an increasingly wider variety of situations with each release. When adopting a new version of the platform you should reconsider your previous uses of GC.Collect() for your next release. There may be more cases where the collector is doing the right thing automatically.

    In summary, GC.Collect should be an episodic thing (i.e. some major event just occurred) not a cyclical thing. If your normal processing is such that a GC.Collect is necessary to get decent memory usage that’s a very bad sign — you are likely to get awful performance. Find a way to have more collector-friendly object lifetimes (see links above for some ideas)

  13. Matt says:

    A question…

    When is the GC called by the system?

    If I remember the .NET memory model, all it is is a pointer to the top of the stack (or heap, I always mix the two up…) which is incremented for new objects being created. The GC is called when the pointer reaches the top of available memory, right?

    Are there any other specific instances when I can count on it being run?

  14. Matthew Wills says:



    > Using Excel via InterOp seems to fit in this category. I’ve run into places where Excel will not quit unless you explicitly do GC.Collect() like described in this MS knowledge base article


    If you actually read the article, you will see that it doesn’t recommend using GC.Collect. That is merely a last-ditch solution if you can’t work out how to do it properly using ReleaseCOMObject.

    If you ReleaseCOMObject *everything*, GC.Collect is unnecessary when dealing with Excel.

  15. Ole Thrane says:

    We have also experienced a lot of GC related problems. Usually in connection with the ASP.NET Cache object.

    The problem is that putting lots of objects into the Cache (within a short time) will eventually cause an OutOfMemoryException in the ASP.NET process.

    We have concluded that even though the Cache sheds the objects under memory pressure, the GC may not run soon enough after that to avoid the OutOfMemoryException (as new objects are still being created).

    We have not been able to find a solution for this problem. Calling GC.Collect periodically helps a lot in avoiding exceptions, but hurts performance (a lot!).

    I guess that the problem is related to the ‘mid-life crisis’ described above, but in this case it is the Cache that causes the crisis, so what can we do?

    In my opinion, it should be possible to tell the GC when an object is no longer needed. This way the cache could inform the GC of all the objects it sheds. The GC would then know to clean these up even if they had been promoted to gen. 2.

    I really hope that all this will be better in (ASP).Net 2.0.

  16. Denis Sakarov says:

    >Except minimising means they’ve moved onto

    >something else,

    >and on a constrained system

    >you’ve just forced the GC to hard fault page

    >after page while it collects.

    Not necessarily, but it’s also good practice to try to determine whether the user is idle (after minimization) in your decision to force a GC. This can easily be baked into the system’s GC policies.

  17. Darren, were you actually remembering to close the files you opened?

  18. Sherif says:

    I can’t agree more that calling GC.Collect is harmful in many cases, but how do I act if I see GC not working as expected. I have a problem that I tried to solve with calling GC.Collect() and the result was making the problem worse.

    My ASP.NET application keeps leaking memory until the worker process gets recycled (2-3 times a day).

    I looked at the performance counter and found that Gen 0,1,2 and large heap sizes are 98MB, 7MB, 130MB and 1MB respectively. While the number of Gen 0, 1 and 2 collections 157, 76 and 10. The percentage of time in GC was 0.032%. The statistics were after 28000 requests. As you can see, the number of garbage collections is very low. So I put a GC.Collect() call at the EndRequest event handler. The result was that Gen 0 heap size was 1MB down from 130MB, but the % of time in GC was 99.6%. So GC.Collect() made the problem worse, so I removed it but my problem still exists.

  19. RichB says:

    Rico, is the same logic true for WaitForPendingFinalizers() as Collect() ?

  20. Rico Mariani says:

    Probably more so Rich — after all it doesn’t make a whole lot of sense to call WaitForPendingFinalizers unless you just forced a collection.

    So if you forced a collection for the reasons above and you expect (i.e. have measured) finalizable objects then it might be useful. But even then it’s of limited usefulness unless you need to synchronize your thread with the release of those objects for some exotic reason.

    I suspect that (by far) the most common reason is for reporting available memory accurately.
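
    For that reporting scenario, the usual shape is collect, wait for the finalizers, then collect again so that anything released by those finalizers is also gone before you read the number (a sketch; MemoryReport is a made-up name):

```csharp
using System;

static class MemoryReport
{
    public static long StableTotalMemory()
    {
        GC.Collect();                  // collect; finalizable objects get queued
        GC.WaitForPendingFinalizers(); // let their finalizers run
        GC.Collect();                  // collect what the finalizers released
        return GC.GetTotalMemory(false);
    }

    static void Main()
    {
        Console.WriteLine(StableTotalMemory()); // bytes currently allocated
    }
}
```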

  21. XNA Diaries says:

    While the slides and recordings from Gamefest are not yet available on the conference website, Rico…

  22. This problem actually comes up pretty often so I thought I’d write a little article about it, and a couple

  23. In Orcas we’ve added an overload to System.GC.Collect(): void System.GC.Collect(int generation, System.GCCollectionMode

  24. says:

    Working at Microsoft is heaven and hell at the same time! Every day you find thousands of interesting things

  25. Scott Dorman says:

    .NET 3.5 changes to GC.Collect

  26. NDepend v2.9 comes with a set of default CQL rules concerning usage guidelines of the .NET Framework


  29. Chris Lyon here. You may remember me from such blogs as How I Learned to Stop Worrying and Love the GC

  30. Someone asked on a DL about when to force a GC using GC.Collect. This has been answered by many

  31. In native world, one interacts with OS directly by calling Win32 APIs for managing resources (like allocating/deallocating

Comments are closed.
