What do you want to know?


In my previous Using GC Efficiently entries I’ve basically covered all the big areas of GC in the CLR. There are of course a lot more things to write about GC, but I want to keep GC users as my target audience, not GC designers/implementors. So I would really like to hear from you, our customers of the .NET Framework: if you have specific things in mind that you would like to know about our GC that I haven’t covered, please feel free to post them as comments on my blog.


Comments (13)

  1. Maoni,

    What are the performance implications of using the gcroot class?

    One obvious and easy-to-measure implication is the decreased speed of pointer creation, destruction and dereferencing, but that’s a fairly visible cost that we can choose to pay or not to pay.

    One thing I don’t know how to measure is the performance hit that the GC takes when there are many gcroots alive.

    This is a fairly important question for anybody doing complex C++ interop. (A rough sketch of the scenario follows at the end of this comment.)

    Thanks,

    Dejan
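
    For concreteness, a rough C# sketch of the scenario Dejan describes, assuming (as I believe is the case) that each gcroot is essentially a strong GCHandle under the covers, so every live one is another root the GC has to scan on each collection:

    ```csharp
    // Rough C# analogue of "many live gcroots": each one is modeled as a strong
    // GCHandle. The handle count and the timing are purely for illustration.
    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class GcRootCost
    {
        static void Main()
        {
            GCHandle[] handles = new GCHandle[100000];
            for (int i = 0; i < handles.Length; i++)
                handles[i] = GCHandle.Alloc(new object());   // ~ constructing a gcroot

            Stopwatch sw = Stopwatch.StartNew();
            GC.Collect();   // compare the timing with 0 vs. 100,000 live handles
            Console.WriteLine("Full GC took {0} ms", sw.ElapsedMilliseconds);

            for (int i = 0; i < handles.Length; i++)
                handles[i].Free();                           // ~ destroying a gcroot
        }
    }
    ```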

  2. Harvey says:

    Hi Maoni,

    Have you changed the way the Large Object Heap works with .Net 2.0?

    Thanks,

  3. What happens in terms of garbage collector performance when the program data largely exceeds the available amount of RAM? (It happens quite often in ‘data mining’, my research area.)

    When should you handle disk caching ‘by hand’, and when should you let the GC just handle everything?

  4. Ole says:

    Hi Maoni

    Is it possible to get some sort of notification from the GC when memory is running low?

    We are doing quite a bit of internal caching in our application and it would be nice to know when to start dropping cached data.

    The only way we found to monitor the memory situation in .Net 1.1 was to call out into the Win32 process API (a sketch of that approach is at the end of this comment). Is there anything new in .Net 2.0?

    Thanks,

    Ole
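
    A sketch of the process-API polling approach Ole mentions; the threshold and the choice of counters here are made up for illustration, not anything the framework prescribes:

    ```csharp
    // Poll the process and managed-heap sizes and decide whether to start
    // dropping cached data. The 100 MB threshold is arbitrary.
    using System;
    using System.Diagnostics;

    class CachePressureCheck
    {
        const long DropThresholdBytes = 100L * 1024 * 1024;   // made-up threshold

        static bool ShouldDropCachedData()
        {
            using (Process me = Process.GetCurrentProcess())
            {
                long privateBytes = me.PrivateMemorySize64;     // committed private memory of the process
                long managedBytes = GC.GetTotalMemory(false);   // rough managed-heap size, without forcing a GC
                Console.WriteLine("private: {0:N0}  managed: {1:N0}", privateBytes, managedBytes);
                return privateBytes > DropThresholdBytes;
            }
        }

        static void Main()
        {
            if (ShouldDropCachedData())
                Console.WriteLine("Time to start trimming the cache.");
        }
    }
    ```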

  5. Pradeep says:

    Hi Maoni,

    I would like to know where exactly the generation details are created and maintained.

    pradeep_tp

  6. Bill Wagner says:

    I had written a few questions during your session at PDC 05 (very good content, btw). I still have not found good answers:

    1. How does a hyper-threaded processor or multi-CPU system affect pinning and compacting (especially with the concurrent GC)?

    2. Is pinning transitive? Namely, if a pinned object has a reference to another object, is that referred to object pinned as well?

    3. Are there different performance considerations for weak references with respect to the Small Object Heap and the Large Object Heap? Namely, are weak references more useful for many small objects, or for a few large objects?

    Thank you

  7. Andrei says:

    Here’s an interesting topic that isn’t mentioned anywhere: the methods the GC uses to stop (pause) threads in concurrent GC, such as thread hijacking, safe points, and others if they exist.

    I also found an interesting question some time ago. It went something like:

    Which one of the following compares how garbage collection works in the .NET Framework between local and distributed objects?

    Choice 1

    The method of garbage collection is determined by the programmer at design time by inheriting from either System.GC.Deterministic or System.GC.NonDeterministic namespace.

    Choice 2

    They are not the same. Local managed code garbage collection uses a collection mechanism when the heap does not have room for a newly created reference object. The distributed managed code garbage collection uses a reference counting mechanism.

    Choice 3

    They are the same. Since the referenced object exists on the same heap, the garbage collection mechanism is the same whether the object is local or distributed.

    Choice 4

    They are the same. Since the reference object is managed code, the garbage collection is always deterministic and is invoked by the class destructor or Finalize() method explicitly by the programmer.

    Choice 5

    They are not the same. Local managed code garbage collection uses a collection mechanism when the heap does not have room for a newly created reference object. The distributed managed code garbage collection uses a lease arrangement similar to DHCP leases.

    Thanks

  8. PiersH says:

    Maybe you could talk about how, in the 2.0 BCL, the async IO classes now pin objects that they allocate internally, thus making it impossible to avoid heap fragmentation by pre-allocating your IO buffers in bulk.
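
    For context, a rough sketch of the bulk pre-allocation technique PiersH is referring to; BufferPool is a made-up name and the counts and sizes are arbitrary:

    ```csharp
    // Allocate all IO buffers up front so that, when they do get pinned,
    // the pinned memory is clustered in one long-lived region rather than
    // scattered across the heap.
    using System.Collections.Generic;

    class BufferPool
    {
        private readonly Stack<byte[]> _free = new Stack<byte[]>();
        private readonly int _bufferSize;

        public BufferPool(int bufferCount, int bufferSize)
        {
            _bufferSize = bufferSize;
            for (int i = 0; i < bufferCount; i++)     // allocate everything at startup, back to back
                _free.Push(new byte[bufferSize]);
        }

        public byte[] Rent()
        {
            return _free.Count > 0 ? _free.Pop() : new byte[_bufferSize];
        }

        public void Return(byte[] buffer)
        {
            _free.Push(buffer);
        }
    }

    class Demo
    {
        static void Main()
        {
            BufferPool pool = new BufferPool(64, 4096);
            byte[] buf = pool.Rent();
            // ... hand buf to Stream.BeginRead / BeginWrite, then ...
            pool.Return(buf);
        }
    }
    ```

    PiersH’s point is that this only helps when the buffers that actually get pinned are the ones you allocated; if the 2.0 async IO classes pin buffers they allocate internally, the clustering is out of your hands.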

  9. gpitou says:

    A few of the developers I work with have seen a situation (under ASP.NET) where the LOH grows faster than the collection rate. This happens when there’s a heavy load on the server. At some point, IIS panics and recycles the app. What someone noticed was that by setting the references to those LOH objects to null, the problem goes away. Why would explicitly setting a reference to null prevent the LOH from growing faster than the collection rate? While on the subject of null, would there be any reason, in general, to set any reference (large or small) to null? Why or why not? (A small sketch of the nulling pattern is at the end of this comment.)

    BTW, nice job at the PDC. I made it a point to stay long enough to hear your talk and was not disappointed.
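
    A minimal sketch of the nulling pattern in question; LargeCache and the buffer size are made up for illustration:

    ```csharp
    // A long-lived object holding a reference to a Large Object Heap allocation.
    class LargeCache
    {
        // Well over the ~85,000-byte threshold, so this array lives on the LOH.
        private byte[] _buffer = new byte[1024 * 1024];

        public void ProcessRequest()
        {
            // ... use _buffer ...
        }

        public void DoneWithBuffer()
        {
            // As long as this (long-lived) object keeps the field set, the LOH
            // array stays reachable. Nulling the field lets a later GC reclaim it.
            _buffer = null;
        }
    }
    ```

    Nulling generally only matters for references that would otherwise stay reachable, such as fields of long-lived objects or statics; for local variables in optimized builds the JIT already tracks the last use of a reference, so setting locals to null usually buys nothing.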

  10. Bill:

    "2. Is pinning transitive? Namely, if a pinned object has a reference to another object, is that referred to object pinned as well?"

    No. Pinning is not transitive.

  11. Eugene says:

    Maoni,

    I would like to know more about the write barrier.

    Thanks.

  12. i do not want blog at all....delete says:

    I DO NOT WANT BLOG AT ALL….DELETE…MY COMUTER HAS MANP ROBLEM/?/  fix the p
