My last entry was some generic advice about how to do a good performance investigation. I think actually it’s too generic to be really useful — in fact I think it fails my Peanut Butter Sandwich Test.
Digression to discuss the Peanut Butter Sandwich Test
I review a lot of documents and sometimes they say things that are so obvious as to be uninteresting. The little quip I have for this situation is, “Yes what you are saying is true of [the system] but it’s also true of peanut butter sandwiches.” Consider a snippet like this one, “Use a cache where it provides benefits,” and compare with, “Use a peanut butter sandwich where it provides benefits.” Both seem to work… that’s a bad sign.
You certainly don’t want to get an F on the Peanut Butter Sandwich Test but hopefully you won’t settle for just a C-.
Back on topic
I thought it would be good to follow up the generic advice with some specific suggestions for things to look at. These are things I look at in step 2 or 3 of the investigation.
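As a concrete starting point, counters like these can be sampled programmatically with System.Diagnostics.PerformanceCounter. A minimal sketch, assuming a Windows machine with the .NET counters installed; the “MyApp” instance name and the NeedsMemoryInvestigation helper are placeholders of mine, not part of any API:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterCheck
{
    // Rule of thumb from the text: ~10% or more time in GC merits a closer look.
    static bool NeedsMemoryInvestigation(float pctTimeInGc)
    {
        return pctTimeInGc >= 10.0f;
    }

    static void Main()
    {
        // "MyApp" is a placeholder; use your own process's instance name.
        using (var timeInGc = new PerformanceCounter(
            ".NET CLR Memory", "% Time in GC", "MyApp"))
        {
            timeInGc.NextValue();        // the first read primes the counter
            Thread.Sleep(1000);          // let a sample interval elapse
            float pct = timeInGc.NextValue();
            Console.WriteLine("% Time in GC: {0:F1}", pct);
            if (NeedsMemoryInvestigation(pct))
                Console.WriteLine("Worth running the secondary memory checks.");
        }
    }
}
```

Of course perfmon.exe will show you the same numbers interactively; the programmatic form is handy when you want your test harness to flag regressions on its own.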
Under .NET CLR Memory, check “% Time in GC.” If it’s getting near 10% or higher you may have some memory issues; consider these secondary tests:
- is the raw allocation rate (“Allocated Bytes/sec”) too high? -> reduce total allocations
- is the promotion rate (“Promoted Memory from Gen 1”) too high? -> be careful about object lifetimes; avoid the “mid-life crisis”
- is the finalization rate (“Finalization Survivors”) too high? -> make sure you are disposing the key objects
- is the heap growing when it shouldn’t (“# Bytes in all Heaps”)? -> check for reference leaks
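To illustrate the disposal point: an object that wraps a finalizable resource should be disposed promptly so it never lands in the “Finalization Survivors” count. A sketch, where the FileLogger class is hypothetical:

```csharp
using System;
using System.IO;

// Hypothetical type that wraps a finalizable resource (the StreamWriter's
// underlying file handle). Disposing it promptly keeps it off the
// finalization queue entirely.
class FileLogger : IDisposable
{
    private readonly StreamWriter _writer;

    public FileLogger(string path) { _writer = new StreamWriter(path); }

    public void Log(string message) { _writer.WriteLine(message); }

    public void Dispose()
    {
        _writer.Dispose();          // release the underlying handle now
        GC.SuppressFinalize(this);  // nothing left for finalization to do
    }
}

class Program
{
    static void Main()
    {
        // "using" guarantees Dispose runs, so the writer's resources are
        // reclaimed deterministically instead of surviving to a later GC.
        using (var logger = new FileLogger("app.log"))
        {
            logger.Log("hello");
        }
        Console.WriteLine(File.ReadAllText("app.log").Trim());
    }
}
```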
Is the CPU not saturated when it should be? Look under .NET CLR LocksAndThreads
- is the “Contention Rate / sec” counter high compared to your throughput rate? -> you should re-examine your locking strategy
- is the “# of current physical Threads” too low for the problem? -> (amended) more parallelism may be helpful; consider using the ThreadPool if not already in use, and possibly adjust ThreadPool parameters to get more threads (not usually needed)
- in the “Thread” category, examine “Context Switches / sec.” Is this high compared to your throughput rate? -> perhaps the work items you are giving the thread pool are too small; consider something chunkier
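On the “chunkier” point, the idea is to queue one work item per batch of elements rather than one per element, so each thread does real work between scheduler transitions. A sketch with invented sizes (the 25000 chunk size is just for illustration; tune it for your workload):

```csharp
using System;
using System.Threading;

class ChunkyWork
{
    static void Main()
    {
        int[] data = new int[100000];
        for (int i = 0; i < data.Length; i++) data[i] = 1;

        long total = 0;
        int chunkSize = 25000;              // illustrative; tune for your workload
        int chunks = data.Length / chunkSize;

        using (var done = new CountdownEvent(chunks))
        {
            for (int c = 0; c < chunks; c++)
            {
                int start = c * chunkSize;  // captured per-iteration on purpose
                // One queued item per chunk, not per element: far fewer
                // context switches and much less traffic on the pool's queue.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    long sum = 0;
                    for (int i = start; i < start + chunkSize; i++) sum += data[i];
                    Interlocked.Add(ref total, sum);
                    done.Signal();
                });
            }
            done.Wait();
        }
        Console.WriteLine(total);  // 100000
    }
}
```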
Is the throughput rate low even though the CPU is saturated?
- look under “.NET CLR Exceptions”, is “# of Excepts Thrown / sec” high compared to your throughput? -> consider reducing use of exceptions in common paths
- look under “.NET CLR Interop”, is “# of marshalling” growing too fast? -> consider simplifying the arguments passed in interop cases so that marshalling is cheaper
- look under “.NET CLR Security”, is “% Time in RT checks” significant? -> consider simplifying the demands being placed on the security system to lower the cost of security checks
- look under “.NET CLR Jit”, is “% Time in Jit” significant? This counter shouldn’t stay high because jitting should settle out; if it remains high, perhaps there is dynamic code generation via reflection going on -> simplify dynamic code cases
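On the exceptions point, the classic fix is to use the Try* pattern so the common path never throws. A small sketch; the ParseOrZero helpers are mine, for illustration:

```csharp
using System;

class ParseExample
{
    // Throws on bad input: every failure shows up in "# of Excepts Thrown / sec"
    // and pays the (considerable) cost of raising and catching an exception.
    static int ParseOrZeroSlow(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return 0; }
    }

    // No exception on the common path: a failure is just a false return value.
    static int ParseOrZeroFast(string s)
    {
        int value;
        return int.TryParse(s, out value) ? value : 0;
    }

    static void Main()
    {
        Console.WriteLine(ParseOrZeroSlow("not a number"));  // 0, via an exception
        Console.WriteLine(ParseOrZeroFast("not a number"));  // 0, no exception
        Console.WriteLine(ParseOrZeroFast("42"));            // 42
    }
}
```

Exceptions are fine for exceptional situations; it’s using them as flow control on a hot path that makes this counter climb.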
This is just a taste of course, and each of these items would likely lead to further investigation with a profiling tool suited to drilling into that particular kind of problem, but these are examples of the leading indicators I use.
For more information on the GC performance counters specifically, see Maoni’s blog entry on that subject. Her most recent article, on using the GC efficiently, is also very interesting; lots of good details there.