As I have mentioned in previous posts, scheduled recycles are a good thing. You should view them as a basic process housekeeping task, kind of like washing dishes or doing laundry. Scheduled recycles are beneficial primarily because they ward off problems caused by both heap fragmentation and virtual memory fragmentation. For those of you who don’t know what fragmentation is, I’ll attempt to describe it here. First, I’ll give you an analogy:
At my restaurant I have a huge parking lot with 200 parking spaces. In the morning my parking lot is completely unfragmented, because every night all the customers drive their cars home or to another lot, leaving it empty.
Once I open my restaurant, people start parking in various parking spaces. This is not a problem because I have lots of spaces, and most people are just picking up takeout food for breakfast, so they don’t stay very long. This means most of the cars are parked near the front of the lot and there are a lot of free spaces in large clumps. My parking lot is slightly fragmented.
As we near lunch time the lot starts to really fill up and cars are parking further out in the lot and some are staying a lot longer than others. This creates clumps of cars all over the lot with only a few single spaces in between the clumps. My parking lot is becoming heavily fragmented. This is still not a problem as long as I have at least one parking spot for each car. In other words, I can still fulfill all of the allocation requests.
Later in the afternoon a tour bus shows up with people who want to eat lunch at my restaurant. A tour bus needs at least 8 parking spaces side by side to park. My parking lot has 48 spaces free, but there are no groups of more than 6 spaces side by side. My parking lot is too fragmented to meet this request.
The bus driver is denied a parking space, not because my parking lot is too small for a bus. I even have enough total parking spaces to park a bus; they are just not in a contiguous block.
Technically, we’re all done with the analogy. But if you really want to have some fun, add in some abandoned cars (memory leaks) and people who scrape or bump the other cars while pulling in or out and then quietly drive away (memory corruption).
To make the mental connection back to memory fragmentation, simply replace the parking lot with your worker process’s virtual address space and replace the cars and the bus with memory allocations.
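The parking-lot picture maps directly onto code. Here is a small Python sketch of the analogy. Everything in it is invented purely for illustration (it is not how any real allocator tracks memory), with the layout chosen to reproduce the numbers from the story: 48 free spaces, but no run longer than 6.

```python
# A toy model of the parking-lot analogy: the list is the lot (the
# virtual address space), "car" entries are allocations, and None
# entries are free spaces.

def largest_contiguous_run(lot):
    """Length of the longest run of free (None) spaces."""
    best = run = 0
    for space in lot:
        run = run + 1 if space is None else 0
        best = max(best, run)
    return best

LOT_SIZE = 200
lot = ["car"] * LOT_SIZE

# Eight scattered clumps of 6 free spaces each: 48 spaces free in total.
for start in range(0, 8 * 25, 25):
    for i in range(start, start + 6):
        lot[i] = None

free_spaces = lot.count(None)
print(f"free spaces: {free_spaces}")                              # 48
print(f"largest contiguous run: {largest_contiguous_run(lot)}")   # 6
print(f"bus (needs 8 in a row) fits: {largest_contiguous_run(lot) >= 8}")
```

Plenty of total capacity, yet the bus is turned away: that gap between "total free" and "largest contiguous free" is exactly what fragmentation means.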
Now that you understand conceptually how memory gets fragmented, the following statements should make perfect sense to you. Fragmentation is a term generally used to describe the condition in which the total amount of a given resource is sufficient to satisfy a request, but no single contiguous block of that resource is large enough to accommodate the entire request. This is true of memory and also of disk drives. Lots of people are familiar with disk fragmentation, so I’ll explain the difference.
What separates the memory problem from the disk problem is that when a large enough contiguous block of disk space cannot be allocated, the file system just breaks the data into chunks and spreads it around, linking each chunk to the next so they form a chain of chunks. This type of fragmentation has a performance impact as the heads are forced to fly back and forth across the disk to hunt down the chunks, but everything still works.
The same is not true for memory. As a general rule, most programming languages that support direct memory addressing assume that memory allocations are contiguous. This means that if I can’t get a single contiguous chunk of memory of the size I requested then the only valid response is to deny my request. In the .NET world that response comes in the form of an OutOfMemoryException.
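To make that concrete, here is a minimal first-fit allocator sketch in Python. It is a toy: the block list, the sizes, and the `OutOfMemory` class are all invented for illustration and bear no resemblance to how the CLR or the NT heap are actually implemented. But it shows why a contiguous-memory model must deny a request that the total free space could otherwise satisfy.

```python
# Free space is tracked as a list of (offset, size) blocks.

class OutOfMemory(Exception):
    pass

def allocate(free_blocks, size):
    """Return the offset of the first free block big enough, or raise."""
    for i, (offset, block_size) in enumerate(free_blocks):
        if block_size >= size:
            # Carve the allocation out of the front of the block.
            if block_size == size:
                free_blocks.pop(i)
            else:
                free_blocks[i] = (offset + size, block_size - size)
            return offset
    raise OutOfMemory(f"no contiguous block of {size} bytes "
                      f"(total free: {sum(s for _, s in free_blocks)})")

# A fragmented address space: 48 MB free in all, but no block over 6 MB.
MB = 1024 * 1024
free_blocks = [(i * 25 * MB, 6 * MB) for i in range(8)]

allocate(free_blocks, 4 * MB)       # small request: succeeds
try:
    allocate(free_blocks, 8 * MB)   # large request: fails despite 44 MB free
except OutOfMemory as e:
    print(e)
```

The failed 8 MB request here plays the role of the tour bus; in a real .NET worker process, that denial surfaces as the OutOfMemoryException described above.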
SharePoint lets you know when this happens by logging memory allocation failures in the ULS logs, the Application Event Logs, and even to the client browser on occasion. If you believe you are getting these errors as a result of fragmentation, you should use Performance Monitor to examine the amount of Virtual Bytes being consumed by your worker process at the time of the error. If you are experiencing memory allocation failures when there is a significant amount of free memory available in the virtual address space, say more than 300MB, you MAY be suffering from fragmentation and should engage Microsoft Support for assistance. For the do-it-yourselfer, check out Yun Jin’s blog for a jump start.
There are some operations in SharePoint that require vast amounts of memory to complete. These are things like using STSADM to backup large site collections where the manifest for the backup may require several hundred megabytes or using the Download a Copy feature with very large files. These operations may fail even when there is very little fragmentation present because they require such large allocations. If you experience these types of failures you may wish to wait until off peak hours, recycle the worker process and then attempt them again immediately to take advantage of the pristine address space created by the recycle.
By recycling the worker process periodically we clear out all of those little allocations that are breaking up the address space and start fresh. While fragmentation will eventually creep into my address space again, if I recycle the worker process at regular intervals it will probably not get bad enough to cause me problems.
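You can watch that creep happen in a toy churn simulation. The Python sketch below is purely illustrative: the space size, allocation sizes, probabilities, and random seed are all made up, and the `first_fit`/`largest_free_run` helpers are toy stand-ins for a real allocator. It allocates and frees blocks at random for a while, reports how chopped up the free space has become, and then "recycles" by clearing everything at once.

```python
import random

# False = free cell, True = allocated cell. All numbers are invented.
SPACE = 512
random.seed(7)                     # deterministic runs for the example

def largest_free_run(space):
    """Length of the longest run of free cells."""
    best = run = 0
    for used in space:
        run = 0 if used else run + 1
        best = max(best, run)
    return best

def first_fit(space, size):
    """Offset of the first run of `size` free cells, or None."""
    run = 0
    for i, used in enumerate(space):
        run = 0 if used else run + 1
        if run == size:
            return i - size + 1
    return None

space = [False] * SPACE
live = []                          # (offset, size) of live allocations

for _ in range(3000):
    if live and random.random() < 0.45:
        off, size = live.pop(random.randrange(len(live)))
        for i in range(off, off + size):
            space[i] = False       # free a random earlier allocation
    else:
        size = random.randint(1, 8)
        off = first_fit(space, size)
        if off is not None:        # a real process would fail here
            for i in range(off, off + size):
                space[i] = True
            live.append((off, size))

churn_free = space.count(False)
churn_run = largest_free_run(space)
print(f"after churn: {churn_free} cells free, largest run {churn_run}")

space = [False] * SPACE            # the recycle: a pristine address space
print(f"after recycle: largest run {largest_free_run(space)}")
```

After thousands of allocate/free cycles the largest contiguous run is typically far smaller than the total free space; the recycle restores one contiguous run spanning the whole space, which is the benefit a scheduled worker process recycle buys you.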
Check back later when we'll talk about: Overlapped Recycling And SharePoint: What To Watch Out For