Too Much Cache?


Cache is used to reduce the performance impact when accessing data that resides on slower storage media.  Without it, your PC would crawl along and become nearly unusable.  If data or code pages for a file reside on the hard disk, it can take the system 10 milliseconds to access the page.  If that same page resides in physical RAM, it can take the system 10 nanoseconds to access the page.  Access to physical RAM is about 1 million times faster than access to a hard drive.  It would be great if we could load all the contents of the hard drive into RAM, but that scenario is cost prohibitive and dangerous.  Hard disk space is far less costly and is non-volatile (the data is persistent even when disconnected from a power source). 




Since we are limited in how much RAM we can stick in a box, we have to make the most of it.  We have to share this crucial physical resource among all running processes, the kernel, and the file system cache.  You can read more about how this works here:


http://blogs.msdn.com/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx




The file system cache resides in kernel address space.  It is used to buffer access to the much slower hard drive.  The file system cache will map and unmap sections of files based on access patterns, application requests and I/O demand.  The file system cache operates like a process working set.  You can monitor the size of your file system cache’s working set using the Memory\System Cache Resident Bytes performance monitor counter.  This value will only show you the system cache’s current working set.  Once a page is removed from the cache’s working set it is placed on the standby list.  You should consider the standby pages from the cache manager as a part of your file cache.  You can also consider these standby pages to be available pages.  This is what the pre-Vista Task Manager does.  Most of what you see as available pages is probably standby pages for the system cache.  Once again, you can read more about this in “The Memory Shell Game” post.
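
For example, here is a minimal sketch that samples this counter once programmatically through the PDH API (an illustration only, not part of the sample tool discussed later; note that counter paths are locale-dependent, so this path assumes an English system):

    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>
    #pragma comment(lib, "pdh.lib")

    int wmain(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER counter;
        PDH_FMT_COUNTERVALUE value;

        /* Open a query and add the counter (English counter path). */
        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;
        if (PdhAddCounterW(query, L"\\Memory\\System Cache Resident Bytes",
                           0, &counter) != ERROR_SUCCESS)
            return 1;

        /* This is an instantaneous counter, so one sample is enough. */
        if (PdhCollectQueryData(query) != ERROR_SUCCESS)
            return 1;
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE, NULL,
                                        &value) != ERROR_SUCCESS)
            return 1;

        wprintf(L"System Cache Resident Bytes: %I64d\n", value.largeValue);
        PdhCloseQuery(query);
        return 0;
    }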





Too Much Cache is a Bad Thing


The memory manager works on a demand-based algorithm.  Physical pages are given to wherever the current demand is.  If the demand isn’t satisfied, the memory manager will start pulling pages from other areas, scrub them, and use them to help meet the growing demand.  Just like any process, the system file cache can consume physical memory if there is sufficient demand. 


Having a lot of cache is generally not a bad thing, but if it comes at the expense of other processes it can be detrimental to system performance.  There are two different ways this can occur: through read I/O and through write I/O.





Excessive Cached Write I/O


Applications and services can dump lots of write I/O to files through the system file cache.  The system cache’s working set will grow as it buffers this write I/O, and system threads will start flushing these dirty pages to disk.  Typically the disk can’t keep up with the I/O speed of an application, so the writes get buffered into the system cache.  At a certain point the cache manager will reach a dirty page threshold and start to throttle I/O into the cache manager.  It does this to prevent applications from overtaking physical RAM with write I/O.  There are, however, some isolated scenarios where this throttle doesn’t work as well as we would expect.  This could be due to bad applications or drivers, or to not having enough memory.  Fortunately, we can tune the amount of dirty pages allowed before the system starts throttling cached write I/O.  This is handled by the SystemCacheDirtyPageThreshold registry value as described in Knowledge Base article 920739: http://support.microsoft.com/default.aspx?scid=kb;EN-US;920739
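
For reference, the KB article places the value under the Memory Management key.  A sketch of the entry, with the data left as a placeholder (consult the KB article for valid values; per a response further down in the comments, the value is only honored on Windows Server 2003 SP2, or SP1 with the KB920739 hotfix installed):

    Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    Value: SystemCacheDirtyPageThreshold
    Type:  REG_DWORD
    Data:  <threshold; see KB 920739 for valid values>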





Excessive Cached Read I/O


While the SystemCacheDirtyPageThreshold registry value can tune the number of write/dirty pages in physical memory, it does not affect the number of read pages in the system cache.  If an application or driver opens many files and actively reads from them continuously through the cache manager, then the memory manager will move more physical pages to the cache manager.  If this demand continues to grow, the cache manager can grow to consume physical memory, and other processes (with less memory demand) will get paged out to disk.  This read I/O demand may be legitimate or may be due to poor application scalability.  The memory manager doesn’t know whether the demand is due to bad behavior or not, so pages are moved simply because there is demand for them.  On a 32 bit system, the file system cache working set is essentially limited to 1 GB.  This is the maximum size that we blocked off in the kernel for the system cache working set.  Since most systems have more than 1 GB of physical RAM today, having the system cache working set consume physical RAM with read I/O is less likely. 


This scenario, however, is more prevalent on 64 bit systems.  With the increase in pointer length, the kernel’s address space is greatly expanded.  The system cache’s working set limit can, and typically does, exceed the amount of memory installed in the system.  It is much easier for applications and drivers to load up the system cache with read I/O.  If the demand is sustained, the system cache’s working set can grow to consume physical memory.  This will push other processes and kernel resources out to the page file and can be very detrimental to system performance.


Fortunately, we can also tune the server for this scenario.  We have added two APIs to query and set the system file cache size – GetSystemFileCacheSize() and SetSystemFileCacheSize().  We chose to implement this tuning option via API calls to allow setting the cache working set size dynamically.  I’ve uploaded the source code and compiled binaries for a sample application that calls these APIs.  The source code can be compiled using the Windows DDK, or you can use the included binaries.  The 32 bit version is limited to setting the cache working set to a maximum of 4 GB.  The 64 bit version does not have this limitation.  The sample code and included binaries are completely unsupported.  It is just a quick and dirty implementation with little error handling.
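
To give a feel for the shape of these calls, here is a minimal sketch (not the sample application itself; the 1 GB hard maximum is an arbitrary illustrative choice).  SetSystemFileCacheSize() requires SeIncreaseQuotaPrivilege, so the sketch enables it first:

    #define _WIN32_WINNT 0x0502
    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "advapi32.lib")

    /* SetSystemFileCacheSize() requires SeIncreaseQuotaPrivilege,
       so enable it in the current process token first. */
    static BOOL EnableIncreaseQuotaPrivilege(void)
    {
        HANDLE token;
        TOKEN_PRIVILEGES tp;
        BOOL ok;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return FALSE;

        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!LookupPrivilegeValueW(NULL, L"SeIncreaseQuotaPrivilege",
                                   &tp.Privileges[0].Luid))
        {
            CloseHandle(token);
            return FALSE;
        }
        AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        ok = (GetLastError() == ERROR_SUCCESS);  /* catches NOT_ALL_ASSIGNED */
        CloseHandle(token);
        return ok;
    }

    int wmain(void)
    {
        SIZE_T minSize, maxSize;
        DWORD flags;

        /* Query the current working set limits of the system file cache. */
        if (!GetSystemFileCacheSize(&minSize, &maxSize, &flags))
        {
            wprintf(L"GetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        wprintf(L"Current min: %Iu MB, max: %Iu MB, flags: 0x%lx\n",
                minSize >> 20, maxSize >> 20, flags);

        if (!EnableIncreaseQuotaPrivilege())
        {
            wprintf(L"Could not enable SeIncreaseQuotaPrivilege\n");
            return 1;
        }

        /* Set a 100 MB soft minimum and a 1 GB hard maximum.
           FILE_CACHE_MAX_HARD_ENABLE makes the maximum a hard limit;
           without a MIN_HARD flag the minimum is not enforced. */
        if (!SetSystemFileCacheSize((SIZE_T)100 * 1024 * 1024,
                                    (SIZE_T)1024 * 1024 * 1024,
                                    FILE_CACHE_MAX_HARD_ENABLE))
        {
            wprintf(L"SetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        wprintf(L"New cache working set limits applied.\n");
        return 0;
    }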

Comments (31)

  1. anony.muos says:

    I’m getting “is not a valid Win32 application” error.

    [I’m guessing that you are running SetCache.exe on Windows XP (or earlier). The GetSystemFileCacheSize and SetSystemFileCacheSize API functions require Windows Server 2003 SP1 or later (this includes Windows XP x64 Edition, since it is built from the 2003 SP1 codebase). Since these functions don’t exist on earlier versions of Windows, the SetCache.exe binary was compiled with the subsystem version set to 5.02, which prevents it from running on versions of Windows where the API functions do not exist.]
  2. Good article. keep it up to keep us up to date.

  3. Mark Edwards says:

    Whilst coming out of L1/L2 cache may get you 20 ns, coming from physical RAM is going to be between 80 and 120 ns on most systems (probably the latter), and on larger systems NUMA node traversal over a crossbar or AMD HyperTransport links will add to this number.

    I’ve not seen any systems going under 80 ns, but I haven’t done performance work on DDR3 yet, so it may be possible outside of cache.

    However, your thought about access being a million times quicker is nearly right. Disk access on the big arrays from EMC, Hitachi or HP averages (roundtrip) between 4 and 8 ms, if we average in cache utilization on their arrays.

    Taking 8 ms and 80 ns to make the sums easier, it’s around 100,000 times quicker.

    This does bring up a point about the new flash drives coming out: the spec says access will be less than 1 microsecond, and typically at the slower end of RAM, which means 150 ns. So at worst case (1 us) these drives are about 8,000 times better than mechanical spindles with a big front side cache. Hence hybrid technology and ReadyBoost are being used to improve latency. Flash drives are too expensive for most consumers so far, but in the enterprise the tipping point for mass production is about a year away if we assume price/capacity rates keep diminishing linearly for the respective technologies. A single fibre flash drive can saturate a 4 Gb/s Fibre link, so the capacity is there.

    This will give a very needed boost for many customers on the performance envelope.

  4. boyflex says:

    How do you find out how much cache is on your PC?

    (nice article, learnt a bit, keep it up)

    [If you want to see how much cache is currently being used, you can do so with the Performance Monitor counter \Memory\Cache Bytes. If you want to see the limits, you can use the SetCache executable or !filecache in a postmortem or live debug. !filecache will also show you the current file cache usage along with how much cache each file is using.]
  5. David says:

    Very helpful.  However, setting max cache size to >= 2 GB does not work for me.  Tried on w2k3 server r2 and xp.

    [This is a limitation on 32 bit systems. With this API and tool you are setting the working set of the file cache. The file cache resides in the kernel address space. On 32 bit systems, you are limited to 2 GB of Kernel address space. Additionally the cache needs to share the kernel address space with other kernel resources, so your cache’s working set won’t even get up to 2 GB. Even though your cache’s working set is limited on 32 bit systems, many pages still may be “cached” on standby pages. If you use the 32 bit version on an x64 box, you can set the cache up to 4GB. At that point you reach the 32 bit SIZE_T limit (32 bits can only address up to 4 GB).]
  6. Steven Gill says:

    Fantastic!

    I have upgraded our backup server to W2k8 x64 from W2k3 x86 and it kept using all physical RAM. I even put 16GB in it and it used it all! This explains it all!

    I have managed to reduce the cache to 1GB, but can’t set it any higher; if I try anything over 1GB it sets to 8TB, e.g.:

    C:\>setcache 2048

    Current Cache Settings:

    Minimum File Cache Size: 100 MBytes

    Maximum File Cache Size: 1024 MBytes

    Flags: 1

    New Cache Settings:

    Minimum File Cache Size: 100 MBytes

    Maximum File Cache Size: 8388607 MBytes

    Flags: 1

    Any ideas? Otherwise I’ll take some of this expensive memory out if it can’t be used usefully!

    [Thanks for the feedback. It turns out there was a bug in the sample code. The bug reveals that I didn’t have a modern system with a lot of RAM to properly test this code. I’ve updated the code and the binaries. Try this new version.]
  7. fuzb says:

    Hello again – thanks for the updated setcache that works perfectly!

    I have another problem though: after a month or so of our backup server being on (no reboots or installs, just doing its normal backups as far as we can tell) the machine will reset the cache back to 8TB. At this point the machine grinds to a halt as all physical RAM is used, and today it was so bad we had to actually press the power button because it wouldn’t respond to ctrl-alt-del or pslist etc.

    Any ideas what might cause that? There shouldn’t be a time limit on how long the cache is set, should there?

    I have set up a scheduled task to run setcache daily and will have to see how it behaves from now on.

    Many Thanks!

    [I don’t see a direct timer that would reset the max size. There could be some crazy set of events that could end up triggering a reset, but I cannot guess on a possible scenario. There is also the possibility that another application on the system is calling SetSystemFileCacheSize() and resetting the cache size. If scheduling a daily task to reset the max size is not enough for you, I recommend opening a support incident with us so that we can investigate this further.]
  8. Sam says:

    Very informative, thanks. But does the SetSystemFileCacheSize() work on Vista SP1?

    Here’s my test: on a system with 1.7 GB RAM, copy 1.5 GB of pictures, file size varies 5 – 15 MB.

    With default settings (no tweaks), Task Manager’s Physical Memory Cached grows while Free goes to 0.

    I ran your SetCache, setting max cache size to 512 MB – no change, Cached still grows to max.

    I set SystemCacheDirtyPageThreshold per the KB article, still no change.

    BTW, if I delete the copied files the Free memory immediately jumps up, indicating to me the system is still caching the files that were just written. Which isn’t necessarily bad, I just want the system to cache LESS of them!

    What really bugs me is my Vista box with 4GB RAM will use 3+ GB for cache while copying files and slowing everything else down.

    [These APIs will work with Vista SP1. The client requirements are XP x64 and Vista. For Server systems, you will need Server 2003 SP1 or Server 2008.

    Do not rely on Task Manager for these values. It is not telling you what you think it is. Please read the Memory Shell Game. Task Manager is telling you that you are using Standby Pages. They are like cached pages and very close to freed pages. In your example the 3+ GB is probably in standby pages. Since they are not being actively used by processes, the system is using them as pseudo cache pages. We don’t want memory to go unused.]

  9. egbvista says:

    Aug 29, 2008

    Re: Cache tuning API’s and Registry values

    The problem seems to be the one-size-fits-all mentality of the operating system, not bad applications. There is no excellent algorithm for an operating system as broadly used as Windows, despite Mark Russinovich’s statement “…the Windows file copy engine tries to handle all scenarios well” [from: http://blogs.technet.com/markrussinovich/archive/2008/02/04/2826167.aspx]. People should read Mark’s well written description of the efforts expended by Microsoft in improving the cache system. It seems they improve it in one place only to have bad performance immediately pop up somewhere else.

    The APIs mentioned in this article are dangerous because they are global and affect all applications on the system. If the system is a 16 core Datacenter machine, the ability to tune the system with these APIs is like using a sledgehammer on earrings.

    There will come a time when the Windows I/O subsystem will be redesigned. This will happen because the amount of parallelism arriving with the many cores of systems in the near future will bring cache performance, scalability and partitioning into focus, where these issues were not visible to the original Windows operating system designers.

    Until that time, I think we all must endure the one-size-fits-all design of the Windows cache. I could be wrong, though.

    Just my 1 cent.

    Ed

    [There are many design challenges for a general use operating system. Windows isn’t a one trick pony. Seldom do servers only run one type of application. You have to throw in backups, management, and a host of other add-on services. While the Cache Manager does a good job for most of the usage scenarios, there are unique cases where it doesn’t work well. For the read I/O cache consumption on 64 bit case, the fundamental problem is a runaway working set. On 64 bit systems, working sets can grow larger than physical RAM on most servers. The default settings for the Memory Manager can’t handle processes that take more than their fair share of physical RAM. It can’t reliably determine whether one process deserves more pages than another. This is why administrators need to tune the server’s configuration. They can use things like WSRM to limit working set growth for individual processes or use the provided APIs to limit the working set of the system file cache. We are working on improving this experience in the next version of Windows. The changes are extensive and the risk of regression is far too high to backport to the current operating systems.]
  10. Excessive paging on Exchange 2007 servers when working sets are trimmed

  11. mike suding says:

    I used setCache.exe on win2008 64bit enterprise to set the cache to 2048MB and the “cached” value that shows in task manager (directly below the memory gauge) just keeps going higher than 2048MB. Mine is now 12,224MB!  Does anyone know what is going on?

    [Task Manager’s value for “Cached” is not what you think it is. In addition to the Cache Manager’s working set, this number includes the number of standby pages in physical RAM. While standby pages are like cached pages, they can be quickly disassociated with the previous working set, scrubbed and handed to a new process. Task Manager is just showing you another way of looking at the data. To see the real working set size of the System File Cache you need to use Performance Monitor. You can read more about this in the Memory Shell Game post.]
  12. Nihility says:

    Using the sample code, is it possible to set the minimum cache size? It defaults to 100MB instead of the original 1MB.

    [By default the sample code is hard coded to set the minimum to 100 MB, but not enforce it. This was done to allow the memory manager to reduce the working set size as needed. You can modify the sample code to set a hard limit by changing the Flags parameter in the call to SetSystemFileCacheSize().]
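
    For illustration, the modification described in that response might look like this sketch (“maxSize” stands in for whatever maximum the sample computes; “Flags: 1” in the output quoted earlier corresponds to FILE_CACHE_MAX_HARD_ENABLE from winbase.h):

        // Default sample behavior: 100 MB minimum, not enforced as a hard limit.
        SetSystemFileCacheSize(100 * 1024 * 1024, maxSize, FILE_CACHE_MAX_HARD_ENABLE);

        // Modified to also enforce the 100 MB minimum as a hard limit:
        SetSystemFileCacheSize(100 * 1024 * 1024, maxSize,
                               FILE_CACHE_MAX_HARD_ENABLE | FILE_CACHE_MIN_HARD_ENABLE);
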
  13. TanMan says:

    This reads and acts like a Russinovich post and tool – easily understandable, educational, small, fast, and useful. Thank you!

    I have Vista Home Premium x64 with 4GB RAM. 1GB max cache on a 4GB system seems much more reasonable than the default of 8.4 TB!

    Do I need to reboot in order for the new cache setting to take effect?

    [Thanks for the feedback. In response to your question, a reboot is not required. The setting is dynamically applied to the system file cache’s working set size. It is important to note this setting is not persistent, so rebooting the machine will revert the size back to the default. In order to maintain the settings you’ll need to run the tool at least once per boot. One approach is the use of a machine startup script to automate the process after a reboot.]
  14. Excessive cached read I/O is a growing problem. For over one year we have been working on this problem

  15. Rilasciato Microsoft Windows Dynamic Cache Service

  16. GeirW says:

    Tool works fine on 2008 x64 in that it commits the change immediately after running the command. However the setting doesn’t stick after a server reboot but defaults back to 8388607MB. Is there a way to make the setting permanent or do we have to resort to running the tool during the Windows startup sequence?

    [Good question. SetCache has been replaced by the Microsoft Windows Dynamic Cache Service. You can read more about it here. These settings are not persistent and will revert to default values when the system starts. You can either use SetCache in a local system startup script or use the Microsoft Windows Dynamic Cache Service. The new service will auto-start with the system and set the cache limit based on many configurable options.]
  17. CecoM says:

    Thank you very much for your post. I am sorry I did not find it 2 years ago… I am trying to download the binaries with no success. Tried several ISPs. Please help?

  18. Nomgle says:

    CecoM, this has now been replaced with Microsoft Windows Dynamic Cache Service – grab it from http://www.microsoft.com/downloads/details.aspx?FamilyID=e24ade0a-5efe-43c8-b9c3-5d0ecb2f39af&displaylang=en

  19. vincentk says:

    I just tried to install this tool on Windows 2008 R2 but it failed to start with the notification that the tool was written for an earlier version of Windows.

    Will there be an update soon or is there a tool provided within R2 that manages the cache size?

    [ Thanks for the great question! The SetCache tool is not an installable tool. Additionally it has been replaced by the Microsoft Windows Dynamic Cache Service. Either way, these tools help to mitigate the problem of excessive growth of the system file cache on versions of Windows prior to Windows 7 and Windows Server 2008 R2. We have updated the memory manager algorithms in Windows 7 and Windows Server 2008 R2 to address this issue natively in the Operating System. You should not use these tools on the latest version of Windows. ]
  20. Updated Comments with Responses from Somak says:

    I wanted to get some clarification after reading the post related to the Windows Dynamic Cache Service. I am running a Windows 2008 R2 NFS server. Our workload performs heavy I/O operations to large files. We are running out of memory during peak workloads. You (Team) mentioned that some architectural changes to memory management in R2 may address these issues. Will this service help my situation? Does it even apply to R2? If so, your help and guidance would be appreciated, as I truly believe this to be our issue! P.S. – I need your help badly if I’m going to solidify Windows’ continued use for our application… please help!

    (Question from another blog reader) I am seeing similar issues on Windows 7. What should I do to try to address/investigate such an issue since they are not supposed to exist anymore?

    [ Windows 7 and 2008 R2 have several Memory Manager changes that should help mitigate this problem. That being said, don’t expect that the cache won’t utilize a fair share of physical RAM if there is a lot of demand for cached I/O. It may use quite a bit of physical RAM, but not at the expense of other processes, and it shouldn’t completely deplete available memory. Some process working set trimming may occur, but that comes from really old pages that should be repurposed. The other thing to keep in mind is that under R2 you may not be experiencing a cache consumption problem, but an I/O bottleneck. If your disks are saturated with normal priority I/O, the rest of the system will operate slowly as the I/O will contend with the cached reads and writes. You need to carefully analyze the perfmon logs from this scenario in order to determine what is going on. Please contact Microsoft Support if you need assistance in reviewing the perfmon logs. ]

    Hello, we recently updated from Windows 2008 to 2008 R2 and we see different system cache behavior. We have a memory intensive application which is running on a dedicated server on its own. 2008 R2 does not let it use up the entire RAM of the server; it limits the system cache to around 50% of total physical memory. This creates a big problem for us because we are using iSCSI, and the data used to fill up the system cache entirely with the previous version of Windows, but it doesn’t happen any more with R2. We’ve tried using the SetSystemFileCacheSize API but it doesn’t seem to have any effect whatsoever on increasing the maximum cache size. This is pretty important for us, so any help would be hugely appreciated. Thank you, Andrei Alecu

    [ Other than the size of the System File Cache’s working set, are you seeing a performance difference between 2008 and 2008 R2? When you say that you have a memory intensive application, are you referring to your process’ working set? If so then using the APIs won’t affect your process’ working set. The APIs are only used to set limits on the System File Cache’s working set. Also note, that you are just looking at the size of the working set for the System File Cache. Pages that are removed from the working set are placed on the standby list and can be easily soft faulted back into the working set. If you are moving a lot of pages through the cache, chances are that your standby list will be mostly full of cached pages. The “Cache” isn’t limited to the size of its working set. ]

    “Could not start DynCache service on Local Computer. Error 216 (0xd8)” This is on a Windows Server 2003 R2 Enterprise x64 box at SP2.

    [ You can use the debug build (included) and Debug View to see exactly why the service is exiting. Also make sure that you are using the AMD 64 bit version of the executable. ]

    I have an IA64 server with Windows 2003 DC edition and Oracle running. Is it a good idea to use the Microsoft Windows Dynamic Cache Service in my environment?

    [ If you have applications that perform a lot of cached read I/O, you can benefit from this service. This would include applications that back up the Oracle database using file copies rather than Oracle interfaces. You can also benefit from this service if your admins could be copying large or many files off of the server. ]

    We are having the page-out issue when copying a large backup between two servers. Both are W2k8 x64 with SQL 2k8. One has 2 instances and the other is a cluster. The copy is from the cluster, using a SAN, to the other machine. To top it off, a virus scan is running on both servers, but it is configured to exclude the recommended SQL files. The error occurs on both servers. If it turns out the cache is responsible, how should it be addressed? Seems like I have two issues. One, can the service watch two instances? The other, is the service okay on a cluster? Any advice? Will there be a hotfix that addresses this? Thanks.

    [ You need to use Perfmon to monitor Available Memory and the size of the system file cache. If during your test, you see the system file cache grows to overtake all of physical RAM, then you should use this service. The service only works on one server. You need to install and configure it on each of your servers. This service will work on clustered servers. There will be no hotfix for this issue. If your cache is consuming physical RAM, you need to use the provided APIs. If you want to save time and do not want to write your own solution, you can use the Dynamic Cache Service to help mitigate this problem. ]

    Hello, on Windows 7 x64 I notice that when a large file is passed through the “write cache” the file gets loaded into memory completely. This becomes a problem with files larger than the amount of available RAM (the system becomes sluggish). Is there any way to limit the maximum cache size on Windows 7 to prevent the write cache from taking up all available RAM?

    [ You can use the GetSystemFileCacheSize() and SetSystemFileCacheSize() APIs on Windows 7 to limit the size of the System File Cache’s working set. The Dynamic Cache Service is one example of how to use these APIs, but there is a hard coded check for Windows 7 in the service. If you absolutely must use this service on Windows 7, you need to modify the code to remove the check. Then the service can work on Windows 7/2008 R2. Before you do this, you really should review a perfmon log of the problem. Do not rely on Task Manager for this problem. The cache may consume a lot of physical RAM, but it should not completely consume it. Some working set trimming may occur, but that is from really old pages that need to be repurposed. Before taking drastic measures, you need to verify that you do have a caching problem and not a disk I/O bandwidth problem. ]

    I am using Vista Ultimate 64-bit on a Q6600 CPU with 8GB of RAM installed. I have added the amd64 version of DynCache.exe to my system and created the service and registry entries. I have set: MaxSystemCacheMBytes = 2048 and it is making no difference. If I copy a large file from a network drive, Task Manager reports the entire available amount of Free memory disappears and the Cached counter climbs to the limit. How can I limit this? Please tell me there is a way. After copying a file like this, the system is so slow to do anything else. As an aside, has this issue been fixed in Win7 and if so, how is it configured? Same registry entries? Thanks to anyone who can help! -Michael

    [ Don’t use Task Manager to troubleshoot this problem. Task Manager combines standby pages and the system file cache’s working set together and reports it as Cached Pages. Standby Pages are like cached pages in that they can be quickly soft-faulted back into the process’ working set, so this reporting is accurate. They are also like available memory because they can be quickly zeroed and given to another process. You should use Perfmon to see what the actual size of the system file cache is. If it is still exceeding the limit you set, use the included debug version and Debug View to see what the service is really doing. This issue has been greatly mitigated in Windows 7. We cannot backport the changes to Vista. For pre-Windows 7 operating systems, you need to use the provided APIs. The Dynamic Cache Service is an example of a solution using the provided APIs. ]

    I thank you for this post. Unfortunately, not even the Microsoft Windows Dynamic Cache Service seems to solve my problem. I tried different settings without success. When I copy large files ~ 10GB from a quick disk to a slow disk (e.g. from a 12-disk RAID to a single SATA disk, or a USB external disk), my system stalls for a while and most running applications do not respond any more. When I open Task Manager, I can see that the amount of free RAM goes to zero within seconds, and then the problems begin. Probably these are even standby pages, but why does this cause such behaviour? I found at Microsoft a hotfix (920739) for Server 2003, which describes exactly my problem, but I use Server 2008 and this won’t fit.

    [Please don’t use Task Manager to look at your memory counters. Task Manager is good for a quick reference, but not good for troubleshooting a specific scenario. Also, don’t rely solely on Free Pages. Your system could have available pages on the standby list. These pages can be easily converted to free, then zeroed, then given to another process. You should use Perfmon to view your memory counters. Check out this post for more information: http://blogs.msdn.com/ntdebugging/archive/2007/10/10/the-memory-shell-game.aspx Also, if you think that the Dynamic Cache Service is not working, you can use the debug build and DebugView to see exactly what it is doing.]
  21. Greg Galloway says:

    I'm a SQL Server Analysis Services MVP, and I'm very interested in the interaction between the system file cache and SSAS (as it leverages the system file cache heavily). Do you know any experts on that topic from Microsoft that I should ping? I've written some C# code that lets you clear the Windows system file cache. The main reason for this is to repeatably retest the performance of an Analysis Services query on a completely cold system file cache (without a server reboot). Two questions: 1. It uses NtSetSystemInformation. Is this doing anything different than SetSystemFileCacheSize? You can see the code here: asstoredprocedures.svn.codeplex.com/…/FileSystemCache.cs 2. From your article, it appears that limiting the system file cache doesn't zero the system file cache memory that's trimmed, but rather moves it to standby. Is there an API (or any other way) of clearing standby memory in the system file cache? The code mentioned above isn't producing repeatable cold system file cache tests, and I believe it's because soft faults from standby are so much faster than hard faults.

    [SetSystemFileCacheSize() internally uses NtSetSystemInformation().  I would recommend using SetSystemFileCacheSize() over NtSetSystemInformation() because SetSystemFileCacheSize() is a public API.  While you can use NtSetSystemInformation(), you run a greater risk of your application breaking if we change the interface.

    If you want to clear out physical RAM without a server reboot, I would recommend creating an application that will consume most of available memory and then dump the pages onto the free list.  First get the current File System Cache’s working set size, then set the limit to something fairly low (but not too low).  Next find out how much available memory is on the system, and then allocate that much memory in your process (leave about 64 MB free).  Write at least one byte per page to guarantee that it will be committed to your process’s working set.  Finally restore the System File Cache’s working set to where it was and then exit the process.]
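
    A rough sketch of that recipe follows (assumptions: a 64-bit build, a 4 KB page size, an arbitrary 256 MB temporary cache cap, and enough commit limit to back the allocation; the privilege handling shown in the earlier sketch is omitted for brevity):

        #define _WIN32_WINNT 0x0502
        #include <windows.h>

        int wmain(void)
        {
            SIZE_T cacheMin, cacheMax, toAlloc, i;
            DWORD cacheFlags;
            MEMORYSTATUSEX ms;
            BYTE *p;

            /* Remember the file cache's current working set limits. */
            if (!GetSystemFileCacheSize(&cacheMin, &cacheMax, &cacheFlags))
                return 1;

            /* Temporarily clamp the cache working set to a low hard maximum. */
            SetSystemFileCacheSize(cacheMin, (SIZE_T)256 * 1024 * 1024,
                                   FILE_CACHE_MAX_HARD_ENABLE);

            /* Find out how much physical memory is available and commit
               almost all of it, leaving roughly 64 MB free. */
            ms.dwLength = sizeof(ms);
            GlobalMemoryStatusEx(&ms);
            toAlloc = (ms.ullAvailPhys > 128ull * 1024 * 1024)
                          ? (SIZE_T)(ms.ullAvailPhys - 64ull * 1024 * 1024) : 0;
            p = toAlloc ? (BYTE *)VirtualAlloc(NULL, toAlloc,
                                               MEM_COMMIT | MEM_RESERVE,
                                               PAGE_READWRITE)
                        : NULL;
            if (p != NULL)
            {
                /* Touch one byte per page so every page joins this process's
                   working set, forcing standby (cached) pages to be repurposed. */
                for (i = 0; i < toAlloc; i += 4096)
                    p[i] = 1;
                VirtualFree(p, 0, MEM_RELEASE);
            }

            /* Restore the original limits; if no hard maximum was set before,
               explicitly disable the one enabled above. */
            SetSystemFileCacheSize(cacheMin, cacheMax,
                                   (cacheFlags & FILE_CACHE_MAX_HARD_ENABLE)
                                       ? cacheFlags : FILE_CACHE_MAX_HARD_DISABLE);
            return 0;
        }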

  22. FJ says:

    I'm not able to start the DynCache service on my Windows Server 2008 SP2 64-bit box. I've been struggling with this for some time now. Please help!!!

    —————————

    Services

    —————————

    Windows could not start the Dynamic Cache Service service on Local Computer.

    Error 216: 0xd8

    —————————

    OK  

    —————————

  23. On Windows 7 SP1 I had to hardcode into the LimitCache function:

    if (MaxCacheSize > 512*1024*1024)
        MaxCacheSize = 512*1024*1024;

    and it now works :) (the upper limit is 512 MBytes, as reported by Sysinternals CacheSet, and once it reached this limit, it didn't cross it).

  24. Hello, very interesting article.

    I am using Windows Server 2008 R2 and am seeing this runaway file cache issue consuming all of the available physical RAM.

    My application does a ton of random access reads and writes.

    What were the changes to the memory manager between Windows 2008 and 2008 R2?

    I am curious since the runaway cache problem is still there.

    What is your recommendation for dealing with it on Windows 2008 R2?

    [The cache manager in Windows Server 2008 R2 handles almost all scenarios more efficiently, and usually avoids the need for dyncache.  Unless you have many individual files open, the cache manager should not encounter the scenario described in this article on R2.  It is difficult to provide 1:1 support through blog comments, if you need troubleshooting assistance you may want to open a support incident so that our engineers can assist you.]

    Ugh, this is terribad. With the current Steam sale lots of downloading is being done. While Steam is downloading, 'Cache WS' in Process Explorer is constantly growing. I saw 3GB of Cache WS on the 4GB system, and instead of discarding this clearly useless memory, it's starting to swap running programs to disk, aarrgh. It's currently so bad I just run Cacheset.exe 1024 1024 half-hourly. Not that it sticks to anywhere near 1024KB, but at least it clears the cache instantly.

    [You may benefit from the service described in this article: http://blogs.msdn.com/b/ntdebugging/archive/2009/02/06/microsoft-windows-dynamic-cache-service.aspx.]

  26. Lonny Niederstadt says:

    Is SystemCacheDirtyPageThreshold still relevant for Windows Server 2012?

    Thanks!

    [That setting is only relevant on Windows Server 2003 SP2, or SP1 with KB920739 installed.  For more information refer to http://support.microsoft.com/kb/920739.]

  27. Lowell says:

    On one of your posts I see the below response-

    [ Thanks for the great question! The SetCache tool is not an installable tool. Additionally it has been replaced by the Microsoft Windows Dynamic Cache Service. Either way, these tools help to mitigate the problem of excessive growth of the system file cache on versions of Windows prior to Windows 7 and Windows Server 2008 R2. We have updated the memory manager algorithms in Windows 7 and Windows Server 2008 R2 to address this issue natively in the Operating System. You should not use these tools on the latest version of Windows. ]

    However, I am experiencing the MetaFile utilizing over 90% of my RAM on Windows Server 2008 R2 w/ SP1 that has 16GB of RAM. Should I install the Windows Dynamic Cache Service? – http://www.microsoft.com/…/details.aspx

    I also question this, as the ReadMe file included in the Dynamic Cache Service zip states the below –

    This service will only run on Windows Server 2008 R2 or earlier versions of Windows.  Do not attempt to run this service on a version of Windows after Windows Server 2008 R2 as it will most likely cause performance problems.

    [Please refer to this article for information on dyncache: http://blogs.msdn.com/b/ntdebugging/archive/2009/02/06/microsoft-windows-dynamic-cache-service.aspx]

  28. Edisson says:

    I'm trying to run this service on Windows Server 2008 R2 (64-bit) but have been unable to make it work. Should I use the Dyncache.exe under the I386 folder or the one under AMD64? If I use the 64-bit one, should that go under C:\Windows\SysWOW64 or should it be under the System32 folder?

    [The answer to your question depends on which 64-bit version of Windows you are using.  If you are using x64, then use the version in the amd64 folder and put it in the system32 folder.  Note that AMD64 refers to the 64-bit extended x86 architecture, and is not specific to hardware manufactured by AMD.  If you are using the Itanium version of Windows, use the version in the ia64 folder and put it in system32.]

  29. Alexander Riccio says:

    The link to the source code seems to be dead.

    [This tool is obsolete and was replaced by dyncache.  The source for dyncache is included in the download package.  http://blogs.msdn.com/b/ntdebugging/archive/2009/02/06/microsoft-windows-dynamic-cache-service.aspx]

  30. elango says:

    What it means:

    The system clock varies too much (v=662.91)

    [This text does not appear in the article, unfortunately we do not know what it means either.]

  31. Llexx says:

    RegValue: MaxSystemCacheMBytes

    Type: REG_DWORD

    Values:

    0 = Limit to 90% of Physical RAM (default)

    1-99 = Limit the maximum size of the System File Cache to this percentage of Physical RAM

    > 200 = Limit the maximum size of the System File Cache to x Mbytes

    How this works:

    This setting is the absolute maximum that the System File Cache’s working set could be set to.  The default is 0, limiting it to 90% of physical RAM with an upper limit of total Physical RAM minus 300 Mbytes.  The lower limit for absolute values is 200 Mbytes and it must be at least 100 Mbytes greater than the MinSystemCacheMBytes value, which defaults to 100 Mbytes.

    [It is good to see someone is reading the readme file.]
