Large Object Heap

The LOH (Large Object Heap) contains objects that are 85,000 bytes or bigger (there are also some objects smaller than 85,000 bytes that are allocated on the LOH by the runtime itself, but they are usually very small and we’ll ignore them for this discussion).
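
As a quick illustration (a minimal sketch, assuming the .NET Framework behavior described in this post, where the LOH is collected as part of generation 2), you can see where an allocation ended up by asking GC.GetGeneration:

    using System;

    class LohDemo
    {
        static void Main()
        {
            // Well under 85,000 bytes (even with the object header) -
            // allocated on the small object heap, so it starts in gen 0.
            byte[] small = new byte[80000];

            // At/over 85,000 bytes - allocated directly on the LOH,
            // which is collected as part of generation 2.
            byte[] large = new byte[85000];

            Console.WriteLine(GC.GetGeneration(small));  // prints 0
            Console.WriteLine(GC.GetGeneration(large));  // prints 2
        }
    }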


The way the LOH is implemented changed dramatically from 1.0 to 1.1. In 1.0 we used a malloc-type allocator for large objects; in 1.1 and beyond we use the same allocator for both large and small objects: we acquire memory from the OS in heap segments and commit on a segment as needed. There’s very little difference between 1.1 and 2.0 for the LOH.


The LOH is not compacted – however, if you need a large object not to move you should make sure to pin it, because whether the LOH is compacted or not is an implementation detail that could change in the future. Free blocks between live large objects are threaded onto a free list which is used to satisfy large object allocation requests. Note that this is a real advantage of the managed heap – we can manage fragmentation very efficiently because we can coalesce adjacent free objects into one big free block. If a free block is too large we call MEM_RESET on it to tell the OS not to bother writing it out to the paging file. So far I haven’t heard of any fragmentation problems from production code (of course you could write a program that specifically fragments the LOH) – I think one reason is that we do a good job controlling fragmentation; the other reason is that people usually don’t churn the LOH too much – one typical pattern I’ve seen is allocating some large arrays that hold on to small objects.
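
To make that parenthetical concrete, here is a minimal sketch (illustrative only – the class name, sizes and counts are arbitrary) of a program that deliberately fragments the LOH by interleaving long-lived and short-lived large arrays and then requesting blocks bigger than any of the holes left behind:

    using System;
    using System.Collections.Generic;

    class LohFragmentation
    {
        static void Main()
        {
            var survivors = new List<byte[]>();
            byte[] scratch = null;

            // Alternate a long-lived ~90 KB array with a short-lived one, so that after
            // a collection the LOH is a checkerboard of live objects and free blocks.
            for (int i = 0; i < 500; i++)
            {
                survivors.Add(new byte[90 * 1024]);
                scratch = new byte[90 * 1024];   // becomes garbage on the next iteration
            }
            Console.WriteLine(scratch.Length);

            GC.Collect();

            // Each of these is bigger than any single ~90 KB hole left behind, so even
            // though there is plenty of free space in total, the LOH has to grow.
            for (int i = 0; i < 500; i++)
            {
                survivors.Add(new byte[200 * 1024]);
            }

            Console.WriteLine(survivors.Count);
        }
    }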


When I talked about weak references in Using GC Efficiently – Part 3, I mentioned a performance implication of using weak refs: when you use them to refer to really small objects – objects comparable in size to the weak reference objects themselves – make sure you are fine with the ratio of the space the weak ref objects take up to the size of the objects they refer to. There’s no difference between using a weak reference to refer to a large object and to a small object, aside from the fact that this ratio is naturally smaller for large objects simply because of their size.
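
As a concrete illustration (a minimal sketch; the size comparisons in the comments are approximate, since exact object sizes depend on the runtime and bitness):

    using System;

    class WeakRefRatio
    {
        static void Main()
        {
            // A weak reference to a large array: the weak ref's own footprint
            // (a small object plus a GC handle) is a tiny fraction of the 1 MB it tracks.
            byte[] bigArray = new byte[1024 * 1024];
            WeakReference weakToBig = new WeakReference(bigArray);

            // A weak reference to a tiny object: here the weak ref is roughly the same
            // size as its target, so the space overhead is on the order of 100%.
            object tiny = new object();
            WeakReference weakToTiny = new WeakReference(tiny);

            Console.WriteLine(weakToBig.IsAlive);    // True while bigArray is still referenced
            Console.WriteLine(weakToTiny.IsAlive);   // True while tiny is still referenced
        }
    }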


In this comment, it was asked why “explicitly setting a reference to null would prevent the LOH from growing faster than the collection rate”. Actually, I think what likely happened was that the large objects were held live by something, so the GC was not reclaiming the memory. Setting these references to null simply lets the GC know that those objects are now garbage and can be reclaimed. You can use either CLRProfiler or the !SOS.gcroot command to verify this.
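
Here is a minimal sketch of that situation (the class and field names are hypothetical): as long as something reachable – a static field in this example – still refers to the large object, no GC will reclaim it; clearing that reference is what makes it collectible, and !SOS.gcroot will show you what is keeping it alive.

    using System;

    class Program
    {
        // Hypothetical root: as long as this static field refers to the array,
        // the GC treats it as reachable and will not reclaim it.
        static byte[] bigBuffer;

        static void Main()
        {
            bigBuffer = new byte[100 * 1024 * 1024];
            Console.WriteLine(bigBuffer.Length);   // ... work with the buffer ...

            // Without this line the array stays reachable from the static root,
            // so no amount of GC activity can reclaim the 100 MB.
            bigBuffer = null;

            GC.Collect();   // now the space is eligible to be reclaimed
        }
    }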


Comments (21)

  1. Anonymous says:

    > the other reason is people usually don’t churn LOH too

    > much – one typical pattern I’ve seen people use LOH is

    > to allocate some large arrays that hold on to small

    > objects.

    What about DataTables? A large DataTable allocates big contiguous arrays for column values, and there is no way to reuse one column array for another table after disposing of the first one.

    A column of GUIDs or decimals can end up on the LOH when the DataTable contains only about 5,000 rows – not very many.

  2. Garry Trinder says:

    I have a question – how was 85,000 selected? Any reasoning?

  3. Unfortunately I can’t answer about DataTables – it’s outside of my expertise.

    85,000 bytes was selected based on tuning (running a bunch of scenarios of our own and from our partner teams).

  4. Nomad says:

    Maoni, it would be great if you could look at my problem. I’ve read all of your articles and it looks like my trouble lies somewhere in memory defragmentation or something similar.

    Look at url that I provided or contact me at

    PS. Sorry for my poor English, I’m from Ukraine

  5. Nomad, I looked at the link but I am not sure exactly how your issue is related to the GC based on the info given. You didn’t say if you were using workstation GC (with or without concurrent GC) or server GC. It should be easy to determine whether the thread is a GC thread if you know which flavor you are using and look at the callstacks from those threads. There are no GC threads that are specifically for "memory defragmentation".

  6. Nomad says:

    Greetings. Thanks for the response.

    I’m using workstation GC now; a short test on server GC did not lead to that uncommon behavior, but it doubled memory usage and led to OOMs. I’ll test it again after a RAM upgrade.

    I’ve tried to collect stack traces using Sysinternals Process Explorer (with Windows debugging symbols, of course), but it looks like the execution point for that thread changes every second and all of the traces are just too different. Maybe we should use some other tool to get it?

    I’ve found another strange thing – when I moved the byte buffer operations (allocations, freeing, byte queue) to a separate C++ assembly and did explicit memory allocation via Marshal::AllocHGlobal/Marshal::FreeHGlobal (the slower case) and C++ new/delete statements, the server’s normal uptime (work time before the "phantom thread" spawns) rose to 5–6 hours from 1–2 hours.

    Just to explain why I’ve moved to explicit allocations, I’ll give you an example: there were 13,649,552 buffers allocated and 13,648,731 buffers released during 1h46m of work time. Each buffer is just a byte[] array of some size (less than 10k).

    I asked myself, is that a normal workload for workstation GC? Maybe it just gets flooded with such numbers? But perf counters show only about 5–8% time in GC. So I tried explicit allocations and got the result above.

  7. Nomad, try turning concurrent GC off (look at my Using GC Efficiently – Part 2) – that way you can be sure there are no threads from the GC (GC would be performed on the thread that triggered a GC).
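
    For reference, concurrent GC is turned off with a setting in the application’s config file (MyApp.exe.config – the file name matches your exe):

        <configuration>
          <runtime>
            <gcConcurrent enabled="false"/>
          </runtime>
        </configuration>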

  8. Nomad says:

    Ok, I set it up. I’ll tell you about any results this evening (or morning in your time zone :))

  9. Nomad says:

    The behavior with concurrent GC turned off changed significantly. First of all, server performance increased greatly – CPU usage shrank by a factor of 1.5. For a few hours it worked perfectly, but after some time something strange happened – the Gen 2 heap rose to 450 MB, the number of Gen 2 collections increases every second, and % time in GC swings between 5 and 80 percent.

    When I read your "Using GC Efficiently" series about a year ago, I began to tune my app’s memory usage and always watched the performance monitor – I never had such a high Gen 2 heap and collection count, and % time in GC was always <= 10%. It looks like there is a hidden error (or a lot of errors) in my code that was not exposed with concurrent GC (apart from the strange thread). I see it’s time to read your articles again and again 🙂

  10. samfung2000 says:

    The GC collects based on 3 conditions:

    1)      Allocation exceeds the Gen0 threshold;

    2)      System.GC.Collect is called;

    3)      The system is in a low-memory situation;

    Can you please elaborate on 1? What is the Gen0 threshold?

    I thought when the small object heap runs low, the GC would just allocate another segment.

    How responsive is 3?

    My app has high memory usage in a single call, and it seems like the GC is not collecting quickly enough, so I got an OutOfMemory exception.


    PS this is good stuff.

  11. samfung2000, if you search for either "budget" or "threshold" on my blog you will see an explanation of it.

    Re your question about your application getting an OOM, you should verify that the objects are indeed dead. If you are sure of that, please do feel free to contact our product support.

  12. Ulf says:

    Almost a year after the last 🙂

    One thing I am confused about is why double arrays are handled differently from other scalar types.

    The limit for a double array seems to be 8K instead of 85K (which has a very bad implication for the application I am working on).

    This can be verified by running this simple code:

       static void Main(string[] args)
       {
           double[] array1 = new double[999];
           Console.WriteLine(GC.GetGeneration(array1));

           double[] array2 = new double[1000];
           Console.WriteLine(GC.GetGeneration(array2));
       }

    The printout is:

       0
       2

    I.e. the double array with 1000 elements (8K) is immediately allocated in generation 2, in this case the LOH.

    To me this feels like a serious bug, but is there something smart going on here?



  13. OmegaMan says:

    I reported the doubles issue to Microsoft Connect and it is now being considered a bug:


    Large Object Heap (LOH) does not behave as expected for Double array placement


  14. It’s not a bug – we do this for performance reasons: doubles are accessed faster when they are aligned on an 8-byte boundary, which large objects are.

  15. There are few constraints on memory usage in SQLCLR modules. This freedom is one of the benefits of the

  16. alps_xing says:

    Hi, Maoni

     I used the code below to do a LOH test, and found that when the double array length is greater than or equal to 124, the Clone operation increases the LOH usage. However, as we know, the LOH threshold for a double array should be 999.

     Does the Clone operation do some boundary alignment? And why is the threshold for Clone 123?

     In the code below, the length of the array is 124. array1’s generation is 0 and array2’s generation is 2, which churns the LOH.

     If I change the array length to 123, the generations of array1 and array2 are both 0.

    using System;
    using System.Collections.Generic;
    using System.Text;

    namespace ConsoleApplication2
    {
        class Program
        {
            static void Main(string[] args)
            {
                // If I change the size to 123, the LOH will not be churned.
                double[] array1 = new double[124];

                double[] array2 = (double[])array1.Clone();
            }
        }
    }

    Xing Qianqian


  17. Jairo says:

    What about the Cache? If I have a large object in the Cache (maybe a DataSet), setting the Cache object to null is not allowed (an exception is thrown) – would just removing it work (Cache.Remove("objectId"))?


  18. Shane says:

    It would be nice if a lot of the .ToArray() functions could use recycled objects.

  19. Jens says:

    I know this is a rather old page, but…

    Actually, we have production code where we ran into LOH fragmentation problems. So it is not only test code that can provoke this; but of those of us actually running into it, I think many find ways around it and never really raise the issue.

    Our system is meant to handle data blocks of up to 50 MB in size, either binary or text, as an "add-in"-based system where all the key components handling the data can be an "add-in", so Input, Processing and Output are all different add-ins that can be combined in any number of ways.

    Obviously we are using AppDomains to load and unload these, and so we easily ran into the LOH fragmentation issues due to duplication across AppDomains, where serialization simply was very messy.

    (250 MB+ of unused space on the LOH before crashing while reading 50 MB into memory.)

    We have solved our problem to an extent, but since we still don't have control over what a 3rd-party developer does in the various add-in blocks, we would like a way to force a compaction of the LOH when we know we have the time for it – something like GC.CompactLOH() would be nice, even if the problem might be theoretical for 99.9% of all situations. For now I guess that won't happen.