Mea Culpa (it's corrections time).

One of the rules in Tim Bray’s version of Sun’s blogging policy is “Write What You Know”.

Well, I should have followed that advice before I posted my 3rd post, “So why does NT require such a wonking great big paging file on my machine”. I’ve since been gently corrected by the guys on the memory management team, who really DO know how this stuff works, and there are a couple of really important mistakes in that post that need to be corrected.

The biggest one is in the title itself. NT doesn’t require a wonking great big paging file. In fact, NT runs just fine with a paging file that’s too small, or even with no paging file at all.

But, when you run without a paging file, you run certain risks. 

One of the things the system keeps track of is the “system-wide commit limit”. It’s the sum of the system RAM plus the aggregate size of all the pagefiles on the system. If there’s no pagefile, the commit limit is correspondingly lower. When you allocate memory that requires commitment (for example, if you allocate memory with the MEM_COMMIT flag, or you create shared memory backed by the paging file), the memory is “charged” against the commit limit. If there’s not enough room under the commit limit, the allocation fails. So your app won’t run until you either free up other memory (by shutting down running applications) or increase the commit limit by adding pagefile space (or more RAM).
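To make that concrete, here’s a minimal sketch (in C, against the Win32 API) of the two examples above: a MEM_COMMIT allocation and a pagefile-backed shared memory section. The sizes and the section name are made up for illustration.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Committing memory charges it against the system-wide commit limit
       (RAM + total pagefile space), even before any page is touched. */
    SIZE_T size = 64 * 1024 * 1024; /* 64M, an arbitrary example size */
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL) {
        /* This is the failure you see when the commit limit is exhausted. */
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    /* Pagefile-backed shared memory is charged the same way: passing
       INVALID_HANDLE_VALUE instead of a file handle backs the section
       with the paging file. The section name here is hypothetical. */
    HANDLE section = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                       PAGE_READWRITE, 0, 16 * 1024 * 1024,
                                       TEXT("MySharedSection"));
    if (section == NULL) {
        printf("CreateFileMapping failed: %lu\n", GetLastError());
    } else {
        CloseHandle(section);
    }

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```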

The other risk of running with too small a paging file occurs when NT attempts to write a dump file after a bluescreen. If the paging file isn’t big enough to hold the contents of physical memory, then the dump file isn’t saved, and an opportunity to improve the system is lost. You could specify a minidump or a kernel-only dump instead, but then you won’t necessarily have all the information needed to debug the problem.
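For reference, the dump type is (to the best of my knowledge) controlled by the CrashDumpEnabled value under the CrashControl key in the registry. Here’s a quick sketch that reads it back:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD type, value, size = sizeof(value);

    /* HKLM\SYSTEM\CurrentControlSet\Control\CrashControl holds the
       crash dump configuration. */
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                     TEXT("SYSTEM\\CurrentControlSet\\Control\\CrashControl"),
                     0, KEY_READ, &key) != ERROR_SUCCESS) {
        printf("Couldn't open the CrashControl key\n");
        return 1;
    }
    if (RegQueryValueEx(key, TEXT("CrashDumpEnabled"), NULL, &type,
                        (LPBYTE)&value, &size) == ERROR_SUCCESS
        && type == REG_DWORD) {
        /* 0 = none, 1 = complete dump, 2 = kernel-only, 3 = small (mini) dump */
        printf("CrashDumpEnabled = %lu\n", value);
    }
    RegCloseKey(key);
    return 0;
}
```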

When NT setup runs, it chooses a default paging file size of 1.5x the physical RAM (for certain sizes of RAM, the actual size chosen will be different, but this covers most of the cases); a 512M machine, for example, gets a 768M paging file by default. This is a guess, but it’s a pretty good guess, especially since the cost of NOT getting it right is likely a call to PSS (Product Support Services).
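Here’s a back-of-the-envelope sketch of that default, using GlobalMemoryStatusEx to read the physical RAM size (remember, setup special-cases some RAM sizes, so treat this as the common case only):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    GlobalMemoryStatusEx(&ms);

    /* Setup's rule of thumb: default pagefile = 1.5x physical RAM. */
    ULONGLONG defaultPagefile = (ms.ullTotalPhys * 3) / 2;

    /* %I64u is the MSVC format specifier for unsigned 64-bit values. */
    printf("RAM: %I64u MB, default pagefile guess: %I64u MB\n",
           ms.ullTotalPhys / (1024 * 1024),
           defaultPagefile / (1024 * 1024));
    return 0;
}
```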

One of the people commenting on my blog asked: “Is there any reason to set the paging file greater than 4095 (i.e. the maximum addressable memory space)? If so, why.” The answer to this is actually pretty straightforward: if the memory load on your system is more than 4G+physical RAM, then you will need a bigger paging file. If you’ve got 20 copies of AutoCAD running on your machine, and each of them is sucking down 400M of memory, then you’re going to need 8G of paging file space to hold all those pages.
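If you’re curious how close your own machine is to its commit limit, here’s a quick sketch using GetPerformanceInfo from psapi (link with psapi.lib); the counters come back in pages, so they’re multiplied by the page size:

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        printf("GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    /* CommitTotal is the current system-wide commit charge; CommitLimit
       is RAM + total pagefile space. Both are reported in pages. */
    printf("Commit charge: %I64u MB of %I64u MB limit\n",
           ((ULONGLONG)pi.CommitTotal * pi.PageSize) / (1024 * 1024),
           ((ULONGLONG)pi.CommitLimit * pi.PageSize) / (1024 * 1024));
    return 0;
}
```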

Btw, one of the tidbits that came out of my discussion with the MM guys was the maximum paging file size on a system:

- On a regular x86 machine, you can have 16 paging files, each up to 4G in size, for a total of 64G of paging file space.
- On an x86 running in PAE mode, you can have 16 paging files, but each can be up to 16TB in size, for a total of 256TB of paging file space.