No GCs for your allocations?


Several people mentioned Java’s “no GC” GC proposal and asked if we can add such a thing. So I thought it deserves a blog post.

Short answer – we already have such a feature and it’s called the NoGCRegion. The GC.TryStartNoGCRegion API allows you to tell us the amount of allocations you’d like to do, and as long as you stay within it, no GCs will be triggered. And you are free to revert back to doing normal GCs whenever you want by calling GC.EndNoGCRegion. This is a better model than having a separate GC.
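As a sketch, the pattern looks like this (the 16 MB budget and the helper name are mine, not part of the API; the budget has to fit within the limits discussed below):

```csharp
using System;
using System.Runtime;

public class NoGCRegionDemo
{
    // Hypothetical budget – it must fit within the SOH segment size
    // limits discussed later in the post.
    const long Budget = 16 * 1024 * 1024; // 16 MB

    public static bool RunWithoutGC(Action work)
    {
        // Ask the runtime to commit enough memory up front; if it can't,
        // TryStartNoGCRegion returns false and no region is entered.
        if (!GC.TryStartNoGCRegion(Budget))
            return false;

        try
        {
            work();
        }
        finally
        {
            // If the budget was exceeded, a GC already happened and the
            // region ended on its own; EndNoGCRegion would then throw, so
            // only call it if we are still in NoGCRegion latency mode.
            if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
        return true;
    }

    static void Main()
    {
        bool ok = RunWithoutGC(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                var tmp = new byte[1024]; // stays well within the budget
            }
        });
        Console.WriteLine(ok ? "no GC region succeeded" : "could not start no GC region");
    }
}
```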

Long answer

It’s a better model because you have the flexibility to revert back to doing GCs when you need to. Would you rather have to spin up a whole new process just so that you can then do work that needs to do GCs (‘cause if you don’t collect, well, your memory usage is just going to keep growing and unless you allocate really little you are going to run out pretty soon)?

And yes, there are currently limitations on how much you can allocate with NoGCRegion. You can’t allocate an arbitrary amount on SOH – I limited it to the SOH segment size simply because we’ve always had the same size for all SOH segments so far. There’s no theoretical reason that segments have to be the same size; it’s a matter of doing the work to make sure we don’t have places that accidentally (artificially) enforce such a limit. There are other design options but I won’t get into the details here.

So this means if you are using Server GC you are able to ask for a lot more memory on SOH because its SOH segment size is a lot larger. Currently (and this has been the case for a long time) the default seg size on 64-bit for Server GC is 4GB, and we cut that in half when you have > 4 procs and in half again when you have > 8 procs. So if you have > 8 procs, the SOH segment size is 1GB each.
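The sizing rule above can be sketched as follows (the helper is illustrative only – it is not a runtime API):

```csharp
using System;

public class SegmentSizeSketch
{
    // Illustrative helper (not a runtime API): the 64-bit Server GC
    // default per-heap SOH segment size, per the rule described above.
    public static long ServerSohSegmentSize(int procCount)
    {
        long size = 4L * 1024 * 1024 * 1024; // 4 GB default
        if (procCount > 4) size /= 2;        // halved with more than 4 procs
        if (procCount > 8) size /= 2;        // halved again with more than 8
        return size;
    }

    static void Main()
    {
        Console.WriteLine(ServerSohSegmentSize(4));  // 4 GB
        Console.WriteLine(ServerSohSegmentSize(8));  // 2 GB
        Console.WriteLine(ServerSohSegmentSize(16)); // 1 GB
    }
}
```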

On Desktop you have ways to make the SOH segment size larger –

- use the gcSegmentSize app config, or
- use the ICLRGCManager::SetGCStartupLimits API

For LOH, as long as we can reserve and commit that much memory, you can allocate it, ‘cause LOH can already have segments of totally different sizes.

We chose to make sure we can commit the amount of memory you ask for up front, instead of randomly throwing OOM at you, because when you use such a feature you really should have a good idea how much you want to allocate. If you do have a scenario that says “I just want to allocate till I get OOM and I don’t care if I randomly get OOM” please let me know – I’d like to understand what the rationale is.
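In code, the two failure modes look roughly like this (the 32 MB budget and the helper are mine for illustration; a request beyond what the GC can ever commit is rejected eagerly rather than OOMing you mid-region):

```csharp
using System;

public class CommitCheckDemo
{
    // Illustrative probe (not a runtime API): classifies what happens
    // when we ask for a given no GC region budget.
    public static string Probe(long budget)
    {
        try
        {
            if (GC.TryStartNoGCRegion(budget))
            {
                GC.EndNoGCRegion();
                return "committed";     // memory was committed up front
            }
            return "not-committed";     // can't commit that much right now
        }
        catch (ArgumentOutOfRangeException)
        {
            return "too-big";           // exceeds the segment-size limit entirely
        }
    }

    static void Main()
    {
        // Hypothetical 32 MB budget – committed (or refused) up front, so
        // you never get a surprise OOM partway through the region.
        Console.WriteLine(Probe(32 * 1024 * 1024));
        // A request beyond the SOH segment size limit is rejected eagerly.
        Console.WriteLine(Probe(long.MaxValue));
    }
}
```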

I did have some bugs when I first checked this into 4.6.x (thanks Matt Warren for reporting). I’ve made fixes in the current release. But I wanted to explain how it’s supposed to work so you know what’s a bug and what’s a design limitation.

We are also still paying the write barrier cost while you are allocating in the NoGCRegion – I kept it that way because, when you revert back to doing GCs, you need the info the write barrier recorded if we don’t do anything special. However, if we so chose, we could skip the write barrier in a NoGCRegion, and when we needed to revert back to doing GCs simply promote everything that’s still live to gen2 (which would be reasonable: at that point you are very likely done with all the temporary stuff, and what’s left is justified to be in the old generation) and bring the write barrier back into the picture. I didn’t do it this way because write barrier cost generally doesn’t come up – not to say that it can’t ever be a problem; it’s just that we always need to prioritize the work based on how much it’s needed.


Comments (11)

  1. onurg says:

    This is the first time I’m hearing that we can set gcSegmentSize from app.config. Is this documented anywhere?

    1. this is an unsupported entry:

      RETAIL_CONFIG_DWORD_INFO_DIRECT_ACCESS(UNSUPPORTED_GCSegmentSize, W("GCSegmentSize"), "Specifies the managed heap segment size")

      Here you see a list of all (undocumented) settings:

      https://github.com/dotnet/coreclr/blob/549c9960a8edcbe3930639e316616d35b22bca25/src/inc/clrconfigvalues.h

      1. Yep, it’s undocumented. And I am hoping soon you will not need to set this anymore (when we make it so it’s not limited by the segment size).

  2. Sachin Joseph says:

    OOM is out of memory I suppose, but what about SOH, LOH, etc.?

    1. Small Object Heap; Large Object Heap

  3. Mark says:

    Hi Maoni, thanks for posting this. Sounds like this could be very useful for optimizing short-lived processes like command line applications; would that be a recommended use case?

    1. I originally did this for some trading applications where people had already done a great job at optimizing allocations so they had very little. And they really didn’t want any interruption from GC during trading hours. So they could just do a gen2 GC to get rid of all garbage before market open and stay in the no GC region during market hours. You could also imagine a scenario where you have multiple instances of a server, and a software LB to direct requests to different instances and have them take turns being in the no GC region to handle requests.

  4. I’m curious to see how this is being used.

    Why would I want to use this and how would I use it?

  5. Matthew J says:

    Currently I can’t think of any of our applications that consume enough memory for this to be viable. I’m glad I read this article, since in the near future this will become an issue with a growing product portfolio.

  6. Michael says:

    Hello Maoni!

    I’ve been an avid reader of all things GC related since, I don’t know when. It’s definitely been since the last time the GC paused a web application for 10+ seconds on a Gen2 collection, though 😉

    Fun and games aside, we’ve noticed in our IIS hosted web applications that with our load profile, it’s beneficial to use multiple virtual machines with smaller RAM sizes instead of one VM with a lot of RAM. That way, the GC will collect more often and thus take shorter breaks.

    Recently, we’ve started to think about using a single VM with multiple AppPools and just calling GC.Collect if the process’s RAM allocation exceeds a predefined threshold. Now, with .NET 4.7, there is the new RecycleLimitInfo API with the RequestGC property. It would be much appreciated to learn a bit more about how this API is intended to be used and how it compares to GC.Collect.

    public class CustomRecycleLimitObserver : IObserver&lt;RecycleLimitInfo&gt;
    {
        public void OnNext(RecycleLimitInfo recycleLimitInfo)
        {
            // Request a GC once private bytes exceed 8 GB.
            if (recycleLimitInfo.CurrentPrivateBytes > 8L * 1024 * 1024 * 1024)
            {
                recycleLimitInfo.RequestGC = true;
            }
        }

        public void OnCompleted() { }
        public void OnError(Exception error) { }
    }

    System.Web.Hosting.HostingEnvironment.ApplicationMonitors.MemoryMonitor.Subscribe(new CustomRecycleLimitObserver());

    Many thanks, Michael
